[issue6594] json C serializer performance tied to structure depth on some systems

2010-11-30 Thread Shawn

Shawn  added the comment:

I specifically mentioned *SPARC* as the performance problem area, but the reply 
about "0.5s to dump" fails to mention on what platform they tested.

My problem is not "undiagnosable".  I'll be happy to provide you with even more 
data files.  But I believe that there is a problem here on some architectures 
for reasons other than simple differences in single-threaded performance that 
could be attributed to processor architecture.

As an example of something that makes a noticeable difference on SPARC systems 
I've checked:

    # could accelerate with writelines in some versions of Python, at
    # a debuggability cost
    for chunk in iterable:
        fp.write(chunk)

Changing that to use writelines() is a significant win.  You go from over a 
million calls to write (for small bits as simple as a single character such as 
'{') to one single call to writelines() with an iterable.
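To make the comparison concrete, here is a minimal sketch of the two strategies (illustrative payload, written for a modern Python 3; the helper names are made up):

```python
import io
import json

# Illustrative payload; the real report used a ~45,000-package catalog.
data = {"key%d" % i: {"nested": [i, str(i)]} for i in range(1000)}
encoder = json.JSONEncoder()

def dump_per_chunk(obj):
    # One fp.write() per chunk; a chunk can be as small as a single "{".
    fp = io.StringIO()
    for chunk in encoder.iterencode(obj):
        fp.write(chunk)
    return fp.getvalue()

def dump_writelines(obj):
    # One writelines() call consuming the whole iterable of chunks.
    fp = io.StringIO()
    fp.writelines(encoder.iterencode(obj))
    return fp.getvalue()
```

Both produce identical output; only the number of write calls differs.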

The recursive call structure of the json code also adds significant overhead on 
some architectures.

What's "undiagnosable" here is the response to the issue I reported -- it 
provides no information about the platform that was tested or how the testing 
was done.

My testing was done by reading the attached file using json, and then timing 
the results of writing it back out (to /tmp, mind you, which is memory-backed on 
my OS platform, so no disk I/O was involved; I've also checked writing to a 
cStringIO object).

--

___
Python tracker 
<http://bugs.python.org/issue6594>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue7969] shutil.copytree error handling non-standard and partially broken

2010-02-19 Thread Shawn

New submission from Shawn :

The error handling present in the implementation of shutil.copytree in python 
2.6.4 (and perhaps other versions) is non-standard and partially broken.

In particular, I'm unable to find any pydoc documentation indicating that when 
copytree raises shutil.Error, the error is initialised with a list of entries 
instead of the standard 2-tuple or 3-tuple.

This means that callers catching EnvironmentError will be in for a surprise 
whenever they check e.args and find a tuple containing a list instead of just a 
tuple.

Callers will also be disappointed to find that the entries in the list may be 
tuples or strings due to what looks like a bug in copystat error handling (it's 
using extend instead of append).

I could possibly live with this behaviour if it were actually documented and 
consistent, since shutil.Error can be caught specifically.

It's also really unfortunate that the tuples that are stored here aren't the 
actual exception objects of the errors encountered so callers cannot perform 
more granular error handling (permissions exceptions, etc.).
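For illustration, a hedged sketch of what a caller has to do today to unpack that args layout (the paths and the report() helper are made up; the (src, dst, reason) shape is what copytree builds internally):

```python
import shutil

def report(err):
    # copytree raises one shutil.Error whose args[0] is a *list* of
    # (src, dst, reason) tuples -- not the usual 2-/3-tuple -- and, due
    # to the copystat extend-vs-append bug, some entries may be strings.
    lines = []
    for entry in err.args[0]:
        if isinstance(entry, tuple):
            src, dst, why = entry
            lines.append("%s -> %s: %s" % (src, dst, why))
        else:
            lines.append(str(entry))
    return lines

err = shutil.Error([("/src/a", "/dst/a", "Permission denied")])
```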

I'd like to request that this function:

* be fixed to append copystat errors correctly

* have shutil.Error documentation indicate the special args format and explain 
how it might be parsed

* consider having it record the actual exception objects instead of just the 
exception messages

* suggest that the default str() output for shutil.Error be improved

--
components: Library (Lib)
messages: 99597
nosy: swalker
severity: normal
status: open
title: shutil.copytree error handling non-standard and partially broken
type: behavior
versions: Python 2.6

___
Python tracker 
<http://bugs.python.org/issue7969>
___



[issue40337] builtins.RuntimeError: Caught RuntimeError in pin memory thread for device 0.

2020-04-20 Thread shawn

New submission from shawn :

 File "D:\yolov3\train.py", line 430, in <module>
   train() # train normally
 File "D:\yolov3\train.py", line 236, in train
   for i, (imgs, targets, paths, _) in pbar: # batch
 File "D:\Programs\Python\Python38\Lib\site-packages\tqdm\std.py", line 1127, in __iter__
   for obj in iterable:
 File "D:\Programs\Python\Python38\Lib\site-packages\torch\utils\data\dataloader.py", line 345, in __next__
   data = self._next_data()
 File "D:\Programs\Python\Python38\Lib\site-packages\torch\utils\data\dataloader.py", line 856, in _next_data
   return self._process_data(data)
 File "D:\Programs\Python\Python38\Lib\site-packages\torch\utils\data\dataloader.py", line 881, in _process_data
   data.reraise()
 File "D:\Programs\Python\Python38\Lib\site-packages\torch\_utils.py", line 394, in reraise
   raise self.exc_type(msg)

builtins.RuntimeError: Caught RuntimeError in pin memory thread for device 0.
Original Traceback (most recent call last):
 File "D:\Programs\Python\Python38\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line 31, in _pin_memory_loop
   data = pin_memory(data)
 File "D:\Programs\Python\Python38\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line 55, in pin_memory
   return [pin_memory(sample) for sample in data]
 File "D:\Programs\Python\Python38\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line 55, in <listcomp>
   return [pin_memory(sample) for sample in data]
 File "D:\Programs\Python\Python38\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line 47, in pin_memory
   return data.pin_memory()
 ... (truncated)

--
messages: 366829
nosy: shawn
priority: normal
severity: normal
status: open
title: builtins.RuntimeError: Caught RuntimeError in pin memory thread for 
device 0.
versions: Python 3.8

___
Python tracker 
<https://bugs.python.org/issue40337>
___



[issue14157] time.strptime without a year fails on Feb 29

2012-02-29 Thread Shawn

Shawn  added the comment:

I'm seeing this when a year *is* specified with Python 2.6 and 2.7:


>>> import time
>>> time.strptime("20090229T184823Z", "%Y%m%dT%H%M%SZ")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/_strptime.py", line 454, in _strptime_time
    return _strptime(data_string, format)[0]
  File "/usr/lib/python2.6/_strptime.py", line 440, in _strptime
    datetime_date(year, 1, 1).toordinal() + 1
ValueError: day is out of range for month

>>> import datetime
>>> datetime.datetime.strptime("20090229T184823Z", "%Y%m%dT%H%M%SZ")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/_strptime.py", line 440, in _strptime
    datetime_date(year, 1, 1).toordinal() + 1
ValueError: day is out of range for month
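(For later readers: the failures above are expected, since 2009 is not a leap year; the same format parses a real leap day fine. A sketch in today's Python 3:)

```python
from datetime import datetime

# 2012 is a leap year, so Feb 29 parses cleanly.
ok = datetime.strptime("20120229T184823Z", "%Y%m%dT%H%M%SZ")

# 2009 is not, so the same date string is rejected.
try:
    datetime.strptime("20090229T184823Z", "%Y%m%dT%H%M%SZ")
except ValueError as exc:
    error = str(exc)   # "day is out of range for month"
```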

--
nosy: +swalker

___
Python tracker 
<http://bugs.python.org/issue14157>
___



[issue14157] time.strptime without a year fails on Feb 29

2012-02-29 Thread Shawn

Shawn  added the comment:

I'm an idiot; nevermind my comment.  The original date was bogus.

--

___
Python tracker 
<http://bugs.python.org/issue14157>
___



[issue21412] core dump in PyThreadState_Get when built --with-pymalloc

2014-05-01 Thread Shawn

Changes by Shawn :


--
nosy: +swalker

___
Python tracker 
<http://bugs.python.org/issue21412>
___



[issue13405] Add DTrace probes

2014-05-02 Thread Shawn

Changes by Shawn :


--
nosy: +swalker

___
Python tracker 
<http://bugs.python.org/issue13405>
___



[issue22148] frozen.c should #include <importlib.h> instead of "importlib.h"

2014-08-05 Thread Shawn

Changes by Shawn :


--
nosy: +swalker

___
Python tracker 
<http://bugs.python.org/issue22148>
___



[issue6594] json C serializer performance tied to structure depth on some systems

2009-07-28 Thread Shawn

New submission from Shawn :

The json serializer's performance (when using the C speedups) appears to
be tied to the depth of the structure being serialized on some systems.
 In particular, dict structures that are more than a few levels deep,
especially when they contain mixed values (lists, strings, and other
dicts), incur severe serialization penalties (in the neighborhood of an
extra 20-30 seconds) on some systems.

On SPARC systems, this is very likely because of the recursive call
structure that the C speedups serializer uses which doesn't work well
with SPARC due to register windows.

On x86 systems, recursive call structures are cheap, so this doesn't
appear to be an issue there.

SPARC systems with higher amounts of memory bandwidth don't suffer as
badly from this (such as Niagara T1000, T2000, etc. systems), but older
UltraSPARC systems are especially prone to performance issues.
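A rough, hypothetical way to reproduce the depth dependence (the structure shape and sizes here are made up, and absolute times vary widely by machine):

```python
import json
import time

def nested(depth, width=30):
    # Build a dict tree `depth` levels deep with `width` children per level,
    # narrowing as we descend so total size stays manageable.
    if depth == 0:
        return {"leaf": ["value", 1, None]}
    return {"k%d" % i: nested(depth - 1, max(2, width // 2))
            for i in range(width)}

for depth in (2, 4, 6):
    obj = nested(depth)
    start = time.perf_counter()
    json.dumps(obj)
    print("depth %d: %.4fs" % (depth, time.perf_counter() - start))
```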

--
components: Library (Lib)
messages: 91015
nosy: swalker
severity: normal
status: open
title: json C serializer performance tied to structure depth on some systems
type: performance
versions: Python 2.6

___
Python tracker 
<http://bugs.python.org/issue6594>
___



[issue6594] json C serializer performance tied to structure depth on some systems

2009-08-05 Thread Shawn

Shawn  added the comment:

As I mentioned, there are also noticeable performance penalties on recent
SPARC systems, such as Niagara T1000, T2000, etc.  The degradation is
just less obvious (a 10-15 second penalty instead of a 20 or 30 second
penalty), while x86 enjoys no penalty at all (in my testing so far).

Here's an example of the data structure:

{
  "packages":{
    "package-name-1":{
      "publisher":"publisher-name-1",
      "versions":[
        [
          "0.5.11,5.11-0.86:20080422T230436Z",
          {
            "depend":{
              "require":[
                {
                  "fmri":"foo"
                },
                {
                  "fmri":"bar"
                }
              ],
              "optional":[
                {
                  "fmri":"baz"
                },
                {
                  "fmri":"quux"
                }
              ]
            }
          }
        ]
      ]
    }
  }
}

Now imagine that there are 45,000 package-name-x entries in the
structure above, and that basically replicates what I'm writing.

If I turn the above structure into a list of lists instead, the penalty
is significantly reduced (halved at least).  If I flatten the stack even
farther, the penalty is essentially gone.  The greater the depth of the
data structure, the greater the penalty.

As for priority, I wouldn't call this "end of the world", but it does
create some unfortunate surprises for developers that rely on the json
module.  Given the long service lifetimes of SPARC systems (due to cost
:)), I suspect this would be of benefit for a long time to come.

--

___
Python tracker 
<http://bugs.python.org/issue6594>
___



[issue6594] json C serializer performance tied to structure depth on some systems

2009-08-06 Thread Shawn

Shawn  added the comment:

First, I want to apologise for not providing more detail initially. 
Notably, one thing you may want to be aware of is that I'm using python
2.4.4 with the latest version of simplejson.  So my timings and
assumptions here are based on the fact that simplejson was adopted as
the 'json' module in python, and I filed the bug here as it appeared
that is where bugs are being tracked for the json module.

To answer your questions though, no, I can't say with certainty that
recursion depth is the issue.  That's just a theory proposed by a
developer intimately familiar with SPARC architecture, who said register
windows on SPARC tend to cause recursive call structures to execute
poorly.  It also seemed to play itself out empirically throughout
testing I performed where any reduction in the depth of the structure
would shave seconds off the write times on the SPARC systems I tested.

I'm also willing to try many of the other things you listed, but I will
have to get back to you on that as I have a project coming due soon.

With that said, I can provide sample data soon, and will do so.  I'll
attach the resulting gzip'd JSON file to make it easy to read and dump.

I would also note that:

* I have tried serialising using cStringIO, which made no significant
difference in performance.

* I have tried different memory allocators, which only seemed to make
things slower, or made little difference.

* Writing roughly the same amount of data (in terms of megabytes), but
in a flatter structure, also increased the performance of the serializer.

* In my testing, it seemed dict serialisation in particular was
problematic from a performance standpoint.

* If I recall correctly from the profile I did, iterencode_dict was
where most of the time was eaten, but I can redo the profile for a more
accurate analysis.

As for Antoine's comments:

I'd like to believe Python is very useful software, and on any platform it
runs on, the market capitalization of that platform is irrelevant;
better-performing Python is always good.

--
versions:  -Python 3.2

___
Python tracker 
<http://bugs.python.org/issue6594>
___



[issue6594] json C serializer performance tied to structure depth on some systems

2009-12-07 Thread Shawn

Shawn  added the comment:

The attached patch doubles write times for my particular case when
applied to simplejson trunk using python 2.6.2.  Not good.

--

___
Python tracker 
<http://bugs.python.org/issue6594>
___



[issue6594] json C serializer performance tied to structure depth on some systems

2009-12-07 Thread Shawn

Shawn  added the comment:

You are right: an environment anomaly led me to falsely believe that
this had somehow affected encoding performance.

I had repeated the test many times with and without the patch using
simplejson trunk and wrongly concluded that the patch was to blame.

After correcting the environment, write performance returned to normal.

This patch seems to perform roughly the same for my decode cases, but
uses about 10-20MB less memory.  My needs are far less than that of the
other poster.

However, this bug is about the serializer (encoder).  So perhaps the
decode performance patch should be a separate bug?

--

___
Python tracker 
<http://bugs.python.org/issue6594>
___



[issue6594] json C serializer performance tied to structure depth on some systems

2009-12-07 Thread Shawn

Shawn  added the comment:

I've attached a sample JSON file that is much slower to write out on
some systems as described in the initial comment.

If you were to restructure the contents of this file into more of a tree
structure instead of the flat array structure it uses now, you will
notice that as the depth increases, serializer performance decreases
significantly.

--
Added file: http://bugs.python.org/file15475/catalog.dependency.C.gz

___
Python tracker 
<http://bugs.python.org/issue6594>
___



[issue23287] ctypes.util.find_library needlessly call crle on Solaris

2017-01-28 Thread Shawn

Shawn added the comment:

Could we get someone to evaluate this please?

--
nosy: +swalker

___
Python tracker 
<http://bugs.python.org/issue23287>
___



[issue26414] os.defpath too permissive

2016-02-23 Thread Shawn

Changes by Shawn :


--
nosy: +swalker

___
Python tracker 
<http://bugs.python.org/issue26414>
___



[issue1727780] 64/32-bit issue when unpickling random.Random

2007-09-17 Thread Shawn Ligocki

Shawn Ligocki added the comment:

I've got a patch! The problem was that the state was being cast from a
C-type unsigned long to a long.

On 32-bit machines this makes large 32-bit longs negative.
On 64-bit machines this preserves the sign of 32-bit values (because
they are stored in 64-bit longs).

My patch returns the values with PyLong_FromUnsignedLong() instead of
PyInt_FromLong(), therefore there is no casting to long and both 32-bit
and 64-bit machines produce the same result.

I added code to read states from the old (buggy) version and decipher it
appropriately (from either a 32-bit or 64-bit source!). In other words,
old pickles can now be opened on either architecture with the new patch.
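The compatibility shim amounts to folding each signed 32-bit value back onto its unsigned equivalent, which in plain Python is just a modulo (a sketch; the sample constants are illustrative, not taken from a real pickle):

```python
# On 32-bit builds the old signed cast turned values >= 2**31 negative;
# 64-bit builds kept them positive, so pickles disagreed across machines.
signed_32bit = -559038737            # what a 32-bit pickle stored for 0xDEADBEEF
unsigned = signed_32bit % 2**32

# The same normalisation is a no-op for values that were already positive.
already_positive = 0x12345678 % 2**32
```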

This patch is taken from the svn head, but also works on Python 2.5.1 .

I haven't tested this patch fully on a 64-bit machine yet. I'll let you
know when I have.

Cheers,
-Shawn

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1727780>
_
Index: Lib/random.py
===================================================================
--- Lib/random.py	(revision 58178)
+++ Lib/random.py	(working copy)
@@ -83,7 +83,7 @@
 
 """
 
-VERSION = 2 # used by getstate/setstate
+VERSION = 3 # used by getstate/setstate
 
 def __init__(self, x=None):
 """Initialize an instance.
@@ -120,9 +120,20 @@
 def setstate(self, state):
 """Restore internal state from object returned by getstate()."""
 version = state[0]
-if version == 2:
+if version == 3:
 version, internalstate, self.gauss_next = state
 super(Random, self).setstate(internalstate)
+elif version == 2:
+version, internalstate, self.gauss_next = state
+# In version 2, the state was saved as signed ints, which causes
+#   inconsistencies between 32/64-bit systems. The state is
+#   really unsigned 32-bit ints, so we convert negative ints from
+#   version 2 to positive longs for version 3.
+try:
+internalstate = tuple( long(x) % (2**32) for x in internalstate )
+except ValueError, e:
+raise TypeError, e
+super(Random, self).setstate(internalstate)
 else:
 raise ValueError("state with version %s passed to "
  "Random.setstate() of version %s" %
Index: Modules/_randommodule.c
===================================================================
--- Modules/_randommodule.c	(revision 58178)
+++ Modules/_randommodule.c	(working copy)
@@ -319,12 +319,12 @@
 	if (state == NULL)
 		return NULL;
 	for (i=0; i<N; i++) {
-		element = PyInt_FromLong((long)(self->state[i]));
+		element = PyLong_FromUnsignedLong(self->state[i]);
 		if (element == NULL)
 			goto Fail;
 		PyTuple_SET_ITEM(state, i, element);
 	}
-	element = PyInt_FromLong((long)(self->index));
+	element = PyLong_FromLong((long)(self->index));
 	if (element == NULL)
 		goto Fail;
 	PyTuple_SET_ITEM(state, i, element);
@@ -339,7 +339,8 @@
 random_setstate(RandomObject *self, PyObject *state)
 {
 	int i;
-	long element;
+	unsigned long element;
+	long index;
 
 	if (!PyTuple_Check(state)) {
 		PyErr_SetString(PyExc_TypeError,
@@ -353,16 +354,16 @@
 	}
 
 	for (i=0; i<N; i++) {
 		element = PyLong_AsUnsignedLong(PyTuple_GET_ITEM(state, i));
 		if (element == (unsigned long)-1 && PyErr_Occurred())
 			return NULL;
-		self->state[i] = (unsigned long)element;
+		self->state[i] = element & 0xffffffffUL; /* Make sure we get sane state */
 	}
 
-	element = PyInt_AsLong(PyTuple_GET_ITEM(state, i));
-	if (element == -1 && PyErr_Occurred())
+	index = PyLong_AsLong(PyTuple_GET_ITEM(state, i));
+	if (index == -1 && PyErr_Occurred())
 		return NULL;
-	self->index = (int)element;
+	self->index = (int)index;
 
 	Py_INCREF(Py_None);
 	return Py_None;



[issue1727780] 64/32-bit issue when unpickling random.Random

2007-09-17 Thread Shawn Ligocki

Shawn Ligocki added the comment:

Yep, tested it on a 64-bit machine and 2 32-bit machines and back and
forth between them. It seems to resolve the problem.

_
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue1727780>
_



[issue10860] Handle empty port after port delimiter in httplib

2011-10-18 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

Great! Glad it landed :)

--

___
Python tracker 
<http://bugs.python.org/issue10860>
___



[issue13489] collections.Counter doc does not list added version

2011-11-26 Thread Shawn Ligocki

New submission from Shawn Ligocki :

collections.Counter doc does not list added version:

http://docs.python.org/library/collections.html

It appears to only have been added in 2.7 (while the rest of the doc says it is 
valid since 2.4)

--
assignee: docs@python
components: Documentation
messages: 148443
nosy: docs@python, sligocki
priority: normal
severity: normal
status: open
title: collections.Counter doc does not list added version
versions: Python 2.6, Python 2.7

___
Python tracker 
<http://bugs.python.org/issue13489>
___



[issue13489] collections.Counter doc does not list added version

2011-11-26 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

Ah, I see; it seems like that would be better suited directly after the section 
title, don't you think?

--

___
Python tracker 
<http://bugs.python.org/issue13489>
___



[issue13489] collections.Counter doc does not list added version

2011-11-29 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

It doesn't seem like the styling is the issue, but the placement. You say that 
the standard style is to put this at the end of the section; is there somewhere 
it would be appropriate to bring this up for discussion? I think it would be 
much more intuitive if it were always placed right after the section name.

--
keywords: +patch
Added file: http://bugs.python.org/file23811/collections.diff

___
Python tracker 
<http://bugs.python.org/issue13489>
___



[issue10860] urllib2 crashes on valid URL

2011-01-07 Thread Shawn Ligocki

New submission from Shawn Ligocki :

urllib2 crashes with stack trace on legal URL http://118114.cn

Transcript:

Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) 
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2
>>> urllib2.urlopen("http://118114.cn")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
  File "/usr/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
  File "/usr/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.6/urllib2.py", line 429, in error
result = self._call_chain(*args)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 605, in http_error_302
return self.parent.open(new, timeout=req.timeout)
  File "/usr/lib/python2.6/urllib2.py", line 391, in open
response = self._open(req, data)
  File "/usr/lib/python2.6/urllib2.py", line 409, in _open
'_open', req)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 1161, in http_open
return self.do_open(httplib.HTTPConnection, req)
  File "/usr/lib/python2.6/urllib2.py", line 1107, in do_open
h = http_class(host, timeout=req.timeout) # will parse host:port
  File "/usr/lib/python2.6/httplib.py", line 657, in __init__
self._set_hostport(host, port)
  File "/usr/lib/python2.6/httplib.py", line 682, in _set_hostport
raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
httplib.InvalidURL: nonnumeric port: ''
>>> 


I think the problem is that "http://118114.cn" says it redirects to 
"http://www.118114.cn:", but it seems like urllib2 should be able to deal with 
that or at least report back a more useful error message.

$ nc 118114.cn 80
GET / HTTP/1.1
Host: 118114.cn   
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) 
Gecko/20101206 Ubuntu/10.04 (lucid) Firefox/3.6.13

HTTP/1.1 301 Moved Permanently
Server: nginx/0.7.64
Date: Fri, 07 Jan 2011 19:06:32 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Keep-Alive: timeout=60
Location: http://www.118114.cn:


301 Moved Permanently

301 Moved Permanently
nginx/0.7.64



--
components: Library (Lib)
messages: 125687
nosy: sligocki
priority: normal
severity: normal
status: open
title: urllib2 crashes on valid URL
type: crash
versions: Python 2.6

___
Python tracker 
<http://bugs.python.org/issue10860>
___



[issue10861] urllib2 sporadically falsely claims infinite redirect

2011-01-07 Thread Shawn Ligocki

New submission from Shawn Ligocki :

urllib2 sporadically falsely claims that http://www.bankofamerica.com/ has 
infinite redirect:


$ python -c 'import urllib2; print 
urllib2.urlopen("http://www.bankofamerica.com/").geturl()'
https://www.bankofamerica.com/

$ python -c 'import urllib2; print 
urllib2.urlopen("http://www.bankofamerica.com/").geturl()'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
return _opener.open(url, data, timeout)
  File "/usr/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
  File "/usr/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.6/urllib2.py", line 429, in error
result = self._call_chain(*args)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 605, in http_error_302
return self.parent.open(new, timeout=req.timeout)
  File "/usr/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
  File "/usr/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.6/urllib2.py", line 429, in error
result = self._call_chain(*args)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 605, in http_error_302
return self.parent.open(new, timeout=req.timeout)
  File "/usr/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
  File "/usr/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.6/urllib2.py", line 429, in error
result = self._call_chain(*args)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 605, in http_error_302
return self.parent.open(new, timeout=req.timeout)
  File "/usr/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
  File "/usr/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.6/urllib2.py", line 429, in error
result = self._call_chain(*args)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 605, in http_error_302
return self.parent.open(new, timeout=req.timeout)
  File "/usr/lib/python2.6/urllib2.py", line 397, in open
response = meth(req, response)
  File "/usr/lib/python2.6/urllib2.py", line 510, in http_response
'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.6/urllib2.py", line 429, in error
result = self._call_chain(*args)
  File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
result = func(*args)
  File "/usr/lib/python2.6/urllib2.py", line 595, in http_error_302
self.inf_msg + msg, headers, fp)
urllib2.HTTPError: HTTP Error 302: The HTTP server returned a redirect error 
that would lead to an infinite loop.
The last 30x error message was:
Found



Since it is sporadic, it could just be a problem with bankofamerica.com's 
servers. Is there an easy way to see what response urllib2 got that made it 
unhappy?
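One way to answer that question is to log each hop before following it; a sketch using today's urllib.request (the handler class name is made up):

```python
import urllib.request

class LoggingRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Record every redirect target so a suspected loop can be inspected."""
    def __init__(self):
        self.hops = []

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        self.hops.append((code, newurl))
        return super().redirect_request(req, fp, code, msg, headers, newurl)

handler = LoggingRedirectHandler()
opener = urllib.request.build_opener(handler)
# opener.open("http://www.bankofamerica.com/")  # network call; afterwards
# handler.hops holds the (code, url) of every redirect for inspection.
```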

--
components: Library (Lib)
messages: 125693
nosy: sligocki
priority: normal
severity: normal
status: open
title: urllib2 sporadically falsely claims infinite redirect
versions: Python 2.6

___
Python tracker 
<http://bugs.python.org/issue10861>
___



[issue10860] Handle empty port after port delimiter in httplib

2011-01-07 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

Sure, I can work on a patch.

Should an empty port default to 80? In other words, does "http://foo.com/" == 
"http://foo.com:/"?
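For reference, the behaviour that eventually landed treats a bare trailing colon as the scheme's default port; in today's http.client (the Python 3 successor to httplib) this can be checked directly:

```python
import http.client

# "host:" with nothing after the colon falls back to the default port.
conn = http.client.HTTPConnection("www.python.org:")

# A genuinely non-numeric port is still rejected.
try:
    http.client.HTTPConnection("www.python.org:abc")
    rejected = False
except http.client.InvalidURL:
    rejected = True
```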

--

___
Python tracker 
<http://bugs.python.org/issue10860>
___



[issue10861] urllib2 sporadically falsely claims infinite redirect

2011-01-07 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

Ahha, what a mess, thanks for investigating! I agree, this is bankofamerica's 
problem.

--

___
Python tracker 
<http://bugs.python.org/issue10861>
___



[issue10860] Handle empty port after port delimiter in httplib

2011-01-07 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

Here's a patch for 2.7 (from the hg checkout 
http://code.python.org/hg/branches/release2.7-maint/)

How does it look? Apparently there was already a testcase for "www.python.org:" 
failing!

--
keywords: +patch
Added file: http://bugs.python.org/file20308/issue.10860.patch

___
Python tracker 
<http://bugs.python.org/issue10860>
___



[issue7076] Documentation add note about SystemRandom

2010-01-21 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

ping

Please look at the last patch. It's very simple and would be helpful. This is 
not very complicated and shouldn't take months to consider.

--

___
Python tracker 
<http://bugs.python.org/issue7076>
___



[issue2444] Adding __iter__ to class Values of module optparse

2008-04-02 Thread Shawn Morel

Shawn Morel <[EMAIL PROTECTED]> added the comment:

gpolo: The argument still doesn't hold. As you point out, it's the 
Values class output from __str__ and other behaviour that is being 
un-pythonic and leading you to believe it's a dictionary. Adding the 
__iter__ method would only make this worse. Then someone else would 
surely ask to have another __*__ method added since dictionaries support 
it but Values doesn't.

The question then is one for optik. Why doesn't Values simply inherit 
from dict, and why does it insist on using __setattr__ rather than 
actually behaving completely like a dictionary? I know I was completely 
surprised by the following:

>>> (opts, args) = parser.parse_args(values={})
>>> print opts
{}
>>>
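(A short illustration of the intended interface, as I understand it: Values is attribute-based, and vars() already gives you the dict view without Values needing __iter__.)

```python
import optparse

parser = optparse.OptionParser()
parser.add_option('--foo')
opts, args = parser.parse_args(['--foo', 'bar'])

# Options land as attributes, not mapping items:
print(opts.foo)    # bar
# The dict view is available via vars(), no __iter__ required:
print(vars(opts))  # {'foo': 'bar'}
```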

--
nosy: +shawnmorel

__
Tracker <[EMAIL PROTECTED]>
<http://bugs.python.org/issue2444>
__



[issue4645] configparser DEFAULT

2008-12-12 Thread Shawn Ashlee

New submission from Shawn Ashlee :

Using .add_section() and .set() for the DEFAULT section adds it twice:

[u...@srv ~]$ cat test_configparser.py 
#!/usr/bin/env python

import ConfigParser

a = ConfigParser.SafeConfigParser()

# borked
a.add_section('DEFAULT')
a.set('DEFAULT', 'foo', 'bar')

# working
a.add_section('working')
a.set('working', 'foo', 'bar')

b = open('testing', 'w')
a.write(b)
b.close()

[u...@srv ~]$ python test_configparser.py 
[u...@srv ~]$ cat testing 
[DEFAULT]
foo = bar

[DEFAULT]

[working]
foo = bar


Tested with 2.4 and 2.5, py3k no longer allows DEFAULT to be passed, so
this is a python < 3k issue.
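(A sketch of the route that sidesteps the bug, in the Python 3 spelling of the module: pass the defaults to the constructor instead of calling add_section('DEFAULT'), which py3k rejects outright. The section then gets written exactly once.)

```python
import configparser
import io

# Defaults go in via the constructor, not add_section('DEFAULT').
cp = configparser.ConfigParser(defaults={'foo': 'bar'})
cp.add_section('working')
cp.set('working', 'foo', 'bar')

buf = io.StringIO()
cp.write(buf)
print(buf.getvalue())
```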

--
components: Extension Modules
messages: 77686
nosy: shawn.ashlee
severity: normal
status: open
title: configparser DEFAULT
versions: Python 2.4, Python 2.5

___
Python tracker 
<http://bugs.python.org/issue4645>
___



[issue14998] pprint._safe_key is not always safe enough

2012-06-03 Thread Shawn Brown

New submission from Shawn Brown <03sjbr...@gmail.com>:

This is related to resolved issue 3976 and, to a lesser extent, issue 10017.

I've run across another instance where pprint throws an exception (but works 
fine in 2.7 and earlier):

Python 3.2 (r32:88445, Mar 25 2011, 19:28:28) 
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from pprint import pprint
>>> pprint({(0,): 1, (None,): 2})
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib/python3.2/pprint.py", line 55, in pprint
printer.pprint(object)
  File "/usr/lib/python3.2/pprint.py", line 132, in pprint
self._format(object, self._stream, 0, 0, {}, 0)
  File "/usr/lib/python3.2/pprint.py", line 155, in _format
rep = self._repr(object, context, level - 1)
  File "/usr/lib/python3.2/pprint.py", line 245, in _repr
self._depth, level)
  File "/usr/lib/python3.2/pprint.py", line 257, in format
return _safe_repr(object, context, maxlevels, level)
  File "/usr/lib/python3.2/pprint.py", line 299, in _safe_repr
items = sorted(object.items(), key=_safe_tuple)
  File "/usr/lib/python3.2/pprint.py", line 89, in __lt__
rv = self.obj.__lt__(other.obj)
TypeError: unorderable types: int() < NoneType()

The above example might seem contrived but I stumbled across the issue quite 
naturally. Honest!

I'm working with multiple lists and computing results using combinations of 
these lists' values.  I _could_ organize the results as a dictionary of 
dictionaries of dictionaries, but that would get confusing very quickly.  
Instead, I'm using a single dictionary with a composite key ("flat is better 
than nested"). So I've got code like this...

>>> combinations = itertools.product(lst_x, lst_y, lst_z)
>>> results = {(x,y,z): compute(x,y,z) for x,y,z in combinations}

... and it is not uncommon for one or more of the values to be None -- 
resulting in the above exception should anyone (including unittest) attempt to 
pprint the dictionary.
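(The root cause in miniature: Python 3 refuses to order tuples whose elements are mutually incomparable, which is exactly what pprint's key sort runs into with these composite keys.)

```python
# Comparing (0,) < (None,) requires 0 < None, which Python 3 rejects.
try:
    sorted([(0,), (None,)])
    raised = False
except TypeError:
    raised = True
print(raised)  # True
```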

--
components: Library (Lib)
messages: 162249
nosy: Shawn.Brown
priority: normal
severity: normal
status: open
title: pprint._safe_key is not always safe enough
versions: Python 3.2

___
Python tracker 
<http://bugs.python.org/issue14998>
___



[issue14998] pprint._safe_key is not always safe enough

2012-06-03 Thread Shawn Brown

Shawn Brown <03sjbr...@gmail.com> added the comment:

Currently, I'm monkey patching _safe_key (adding a try/except) as follows:

>>> import pprint
>>>
>>> class _safe_key(pprint._safe_key):
...     def __lt__(self, other):
...         try:
...             rv = self.obj.__lt__(other.obj)
...         except TypeError:   # Exception instead of TypeError?
...             rv = NotImplemented
...         if rv is NotImplemented:
...             rv = (str(type(self.obj)), id(self.obj)) < \
...                  (str(type(other.obj)), id(other.obj))
...         return rv
...
>>> pprint._safe_key = _safe_key
>>>
>>> pprint.pprint({(0,): 1, (None,): 2})
{(None,): 2, (0,): 1}
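(The same idea as a standalone, runnable sketch, independent of pprint's internals. The class name SafeKey and the (type name, id) fallback tuple are illustrative choices, not pprint's actual code.)

```python
class SafeKey:
    """Sort-key wrapper that falls back to (type name, id) ordering
    when the wrapped objects are mutually incomparable."""

    def __init__(self, obj):
        self.obj = obj

    def __lt__(self, other):
        try:
            return self.obj < other.obj
        except TypeError:
            return ((str(type(self.obj)), id(self.obj)) <
                    (str(type(other.obj)), id(other.obj)))

# The dict that makes plain sorted() blow up now sorts cleanly:
items = {(0,): 1, (None,): 2}.items()
ordered = sorted(items, key=lambda kv: SafeKey(kv[0]))
print(len(ordered))  # 2 -- the sort no longer raises
```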

--

___
Python tracker 
<http://bugs.python.org/issue14998>
___



[issue14998] pprint._safe_key is not always safe enough

2012-06-07 Thread Shawn Brown

Shawn Brown <03sjbr...@gmail.com> added the comment:

Here's a patch for 3.3 -- as well as two new assertions in test_pprint.py

The added try/catch also fixes the issues mentioned in issue 10017 so I added a 
test for that case as well.

--
keywords: +patch
Added file: http://bugs.python.org/file25864/pprint_safe_key.patch

___
Python tracker 
<http://bugs.python.org/issue14998>
___



[issue19016] autospecced namedtuples should be truthy by default

2013-09-14 Thread Shawn Krisman

New submission from Shawn Krisman:

import mock
from collections import namedtuple

Foo = namedtuple('Foo', 'bar')
mock_foo = mock.create_autospec(Foo)

if mock_foo:
print('the namedtuple is truthy')
else:
print('the namedtuple is not truthy')


The expected behavior is that it should print "the namedtuple is truthy." 
Instead it prints "the namedtuple is not truthy." Almost all namedtuples are 
truthy, the exception being the nearly useless namedtuple with 0 fields. The 
problem stems from the fact that tuples have __len__ defined, which is what is 
used to derive truthiness, and MagicMocks define __len__ to return zero by 
default. Workarounds are very difficult because you cannot simply define 
__len__ to be a nonzero number and have the mock work correctly. 


In general MagicMock has defaults that encourage the mocks to be very truthy 
all around:

__bool__ -- True
__int__ -- 1
__complex__ -- 1j
__float__ -- 1.0
__index__ -- 1

So it was interesting to me to find out that __len__ was defined to be 0. The 
fix that I am proposing is to make 1 the new default for __len__. I believe 
this is a more useful default in general, because an instance of a class with 
a __len__ attribute will likely be truthy far more often than not.


There are of course backwards compatibility issues to consider here, however I 
don't think many people are assuming this behavior. Certainly nobody in the 
python code base.
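(A runnable demonstration of the report plus one workaround, using the stdlib unittest.mock spelling of the library. Since tuple has no __bool__, truthiness of the autospecced mock goes through __len__, which MagicMock defaults to 0; overriding that return value is the workaround assumed here.)

```python
from collections import namedtuple
from unittest import mock

Foo = namedtuple('Foo', ['bar'])
mock_foo = mock.create_autospec(Foo)

# tuple defines __len__ but not __bool__, so the mock's default
# __len__ return value of 0 makes it falsy:
print(bool(mock_foo))  # False

# Workaround: configure the magic method's return value directly.
mock_foo.__len__.return_value = 1
print(bool(mock_foo))  # True
```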

--
components: Library (Lib)
files: namedtuple_truthiness.patch
keywords: patch
messages: 197698
nosy: michael.foord, skrisman
priority: normal
severity: normal
status: open
title: autospecced namedtuples should be truthy by default
type: behavior
versions: Python 3.5
Added file: http://bugs.python.org/file31755/namedtuple_truthiness.patch

___
Python tracker 
<http://bugs.python.org/issue19016>
___



[issue19016] autospecced namedtuples should be truthy by default

2013-09-16 Thread Shawn Krisman

Shawn Krisman added the comment:

Yeah in my head I was thinking it would affect relatively few people who 
depended on the change, but it's definitely hard to prove that!

How about a change that special cases namedtuple?

--

___
Python tracker 
<http://bugs.python.org/issue19016>
___



[issue19016] autospecced namedtuples should be truthy by default

2013-09-26 Thread Shawn Krisman

Shawn Krisman added the comment:

This fix is actually backwards compatible. This is a more powerful patch too 
because not only does it provide a better default for truthiness, but it also 
provides a better default for length. I also fixed a spelling mistake involving 
the word "calculate".

--
Added file: http://bugs.python.org/file31875/namedtuple_truthiness_2.patch

___
Python tracker 
<http://bugs.python.org/issue19016>
___



[issue6573] set union method ignores arguments appearing after the original set

2009-07-25 Thread Shawn Smout

New submission from Shawn Smout :

When calling the union method of a set with several arguments, if one of
those sets is the original set, all arguments appearing after it are
ignored.  For example:

x = set()
x.union(set([1]), x, set([2]))

evaluates to set([1]), not set([1, 2]) as expected.  As another example,
since all empty frozensets are the same,

frozenset().union(frozenset([1]), frozenset(), frozenset([2]))

also evaluates to just frozenset([1]).

The fix is trivial, so I'm attaching a patch.
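(The expected semantics, which current Pythons satisfy since this was fixed: every argument contributes to the union even when the receiver itself appears mid-argument-list.)

```python
x = set()
result = x.union({1}, x, {2})
print(result)  # {1, 2}

# Same point with frozensets, where all empty ones are interchangeable:
frozen = frozenset().union(frozenset([1]), frozenset(), frozenset([2]))
print(frozen)  # frozenset({1, 2})
```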

--
files: set_union.patch
keywords: patch
messages: 90925
nosy: ssmout
severity: normal
status: open
title: set union method ignores arguments appearing after the original set
Added file: http://bugs.python.org/file14565/set_union.patch

___
Python tracker 
<http://bugs.python.org/issue6573>
___



[issue7076] Documentation add note about SystemRandom

2009-10-06 Thread Shawn Ligocki

New submission from Shawn Ligocki :

I did not notice the existence of random.SystemRandom until after I had
implemented my own version. I thought it would be nice to mention it in
the opening section. I've added a tiny note about random.SystemRandom.
What do you guys think? Feel free to reword it; I just think that it
should be mentioned.

http://docs.python.org/library/random.html

--
assignee: georg.brandl
components: Documentation
files: random.patch
keywords: patch
messages: 93678
nosy: georg.brandl, sligocki
severity: normal
status: open
title: Documentation add note about SystemRandom
type: feature request
versions: Python 2.6, Python 2.7
Added file: http://bugs.python.org/file15065/random.patch

___
Python tracker 
<http://bugs.python.org/issue7076>
___



[issue7076] Documentation add note about SystemRandom

2009-10-07 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

Oh, urandom is almost always non-deterministic. It mixes completely
random bits from hardware sources with its pseudo-random number state.
The more random bits it gets from hardware, the less predictable its
output is. However, as long as it's getting any random bits, its output
is not deterministic (because it's based on some random information).

But perhaps there is better wording that conveys the power of the
urandom source?

--

___
Python tracker 
<http://bugs.python.org/issue7076>
___



[issue7076] Documentation add note about SystemRandom

2009-10-07 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

Ah, sorry for the misunderstanding. I agree, better not to mislead. 

Perhaps we should side with the urandom documentation and say that it is
a cryptographically secure random number generator with no accessible state?

--

___
Python tracker 
<http://bugs.python.org/issue7076>
___



[issue7076] Documentation add note about SystemRandom

2009-10-07 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

A major pro for pseudo-random number generators is that they are
deterministic: you can save and load the state, start from the
same seed and reproduce results, etc. At least in science (and probably
other areas) this reproducibility can be vital in a random class.

It really depends on your application though. In my case, I was
originally using the normal random module to produce seeds for another
program's random number generator. This ended up producing many identical
results and thus not an appropriate random sampling. Rather than
trying to figure out a proper way to do this with a PRNG, I decided to
just use a completely random source; urandom was close enough for my needs.

I believe that is its strongest value, not having the strange artifacts
that PRNGs have. But I'm not completely sure how true that claim is :)
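(A sketch of the seeding use case described above: draw seeds from the OS entropy pool via SystemRandom, then hand them to ordinary reproducible PRNG instances. The count of four workers is an arbitrary illustration.)

```python
import random

# Non-deterministic seeds from the OS entropy pool:
sysrand = random.SystemRandom()
seeds = [sysrand.getrandbits(64) for _ in range(4)]

# Each worker PRNG is still reproducible from its recorded seed:
workers = [random.Random(seed) for seed in seeds]
replay = random.Random(seeds[0])
print(workers[0].random() == replay.random())  # True
```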

--

___
Python tracker 
<http://bugs.python.org/issue7076>
___



[issue7076] Documentation add note about SystemRandom

2009-11-02 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

I rewrote the description, mostly using the claims from urandom's
documentation, so that we don't claim anything new. What do you guys think?

--
Added file: http://bugs.python.org/file15251/random.patch

___
Python tracker 
<http://bugs.python.org/issue7076>
___



[issue7076] Documentation add note about SystemRandom

2009-11-02 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

So, all I really want to do is call attention to SystemRandom from the
top of the page, because it is easily missed at the bottom. Do you
guys have any suggestions for how to do that in a way that doesn't repeat
too much and doesn't claim things that you aren't comfortable claiming?

--

___
Python tracker 
<http://bugs.python.org/issue7076>
___



[issue7076] Documentation add note about SystemRandom

2009-11-02 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

How about this, sweet and simple.

--
Added file: http://bugs.python.org/file15252/random.patch

___
Python tracker 
<http://bugs.python.org/issue7076>
___



[issue7076] Documentation add note about SystemRandom

2009-11-02 Thread Shawn Ligocki

Shawn Ligocki  added the comment:

There is a whole paragraph about WichmannHill at the top of this page
already, and (if anything) I think that WichmannHill is less notable
(basically only used in legacy applications). However SystemRandom is
very useful. I don't want to make claims about urandom that I can't back
up, but urandom is very useful and I think that there ought to be some
note of it in the opening for people who want a stronger random
instance. All I'm suggesting is a sentence to point it out. That would
have been enough for me not to have reinvented the wheel.

--

___
Python tracker 
<http://bugs.python.org/issue7076>
___