At the risk of igniting a flame war...can someone please help me understand
the indexing behavior of NumPy? I will readily admit I come from a Matlab
background, but I appreciate the power of Python and am trying to learn more.
From a Matlab user's perspective, the behavior of indexing in NumPy
>>>>>> ab))
>>>>>> # Collect tols
>>>>>> tols[i] = tol_s0, tol_inf, tol_ub_inf, tol_ub2_inf, tol_mlab, S.min()
>>>>>>
>>>>>> rel_tols = tols / tols[:, -1][:, None]
>>>>>>
>>>>>> fmt = 'Percent undetected %s: %3.1f, tol / S.min(): %2.3f'
>>>>>> max_rank = min(M, N)
>>>>>> for name, ranks, mrt in zip(
>>>>>> ('current', 'inf norm', 'upper bound inf norm',
>>>>>> 'upper bound inf norm * 2', 'MATLAB'),
>>>>>> (ranks, ranks_inf, ranks_ub_inf, ranks_ub2_inf, ranks_mlab),
>>>>>> rel_tols.mean(axis=0)[:5]):
>>>>>> pcnt = np.sum(np.array(ranks) == max_rank) / 1000. * 100
>>>>>> print fmt % (name, pcnt, mrt)
>>>>>>
>>>>>
>>>>>
>>>>> The polynomial fitting uses eps times the largest array dimension for the
>>>>> relative condition number. IIRC, that choice traces back to Numerical
>>>>> Recipes.
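>>>>>
>>>>> For reference, that default amounts to something along these lines (a
>>>>> rough sketch of the rcond choice only, not the actual polyfit code):
>>>>>
>>>>> import numpy as np
>>>>>
>>>>> x = np.linspace(0., 1., 50)
>>>>> y = 1. + 2. * x
>>>>> # eps scaled by the number of points, i.e. the largest dimension of the
>>>>> # Vandermonde matrix; this is my reading of the documented polyfit default
>>>>> rcond = len(x) * np.finfo(x.dtype).eps
>>>>> coeffs = np.polyfit(x, y, 1, rcond=rcond)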
>>>
>>> Chuck - sorry - I didn't understand what you were saying, and now I
>>> think you were proposing the MATLAB algorithm. I can't find that in
>>> Numerical Recipes - can you? It would be helpful as a reference.
>>>
>>>> This is the same as Matlab, right?
>>>
>>> Yes, I believe so, i.e:
>>>
>>> tol = S.max() * np.finfo(M.dtype).eps * max((m, n))
>>>
>>> from my original email.
>>>
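>>> As a quick sanity check of that threshold, here is a small illustrative
>>> sketch (array sizes and seed chosen arbitrarily, just to get a
>>> reproducible rank-deficient example):
>>>
>>> import numpy as np
>>>
>>> rng = np.random.RandomState(0)
>>> a, b = rng.rand(100), rng.rand(100)
>>> A = np.column_stack([a, b, a + b])   # third column is a linear combination
>>> S = np.linalg.svd(A, compute_uv=False)
>>> tol = S.max() * np.finfo(A.dtype).eps * max(A.shape)
>>> print np.sum(S > tol)   # reports 2, i.e. the deficiency is detected
>>>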
>>>> If the Matlab condition is the most conservative, then it seems like a
>>>> reasonable choice -- conservative is good so long as your false
>>>> positive rate doesn't become too high, and presumably Matlab has enough
>>>> user experience to know whether the false positive rate is too high.
>>>
>>> Are we agreeing to go for the Matlab algorithm?
>>>
>>> If so, how should this be managed? Just changing it may change the
>>> output of code using numpy >= 1.5.0, but then again, the threshold is
>>> probably incorrect.
>>>
>>> Fix and break, or add a FutureWarning with something like:
>>>
>>> def matrix_rank(M, tol=None):
>>>
>>> where ``tol`` can be a string like ``maxdim``?
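>>>
>>> i.e. something along these lines (only a sketch of the transition
>>> mechanics; the string handling and the old default shown here are my
>>> reading, not a finished patch):
>>>
>>> import warnings
>>> import numpy as np
>>>
>>> def matrix_rank(M, tol=None):
>>>     A = np.asarray(M)
>>>     S = np.linalg.svd(A, compute_uv=False)
>>>     eps = np.finfo(S.dtype).eps
>>>     if tol is None:
>>>         warnings.warn("the default tol will change to "
>>>                       "S.max() * max(M.shape) * eps; pass tol='maxdim' "
>>>                       "to get the new behaviour now", FutureWarning)
>>>         tol = S.max() * eps                  # current default, as I understand it
>>>     elif tol == 'maxdim':
>>>         tol = S.max() * max(A.shape) * eps   # matlab-style threshold
>>>     return np.sum(S > tol)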
>>
>> I dunno, I don't think we should do a big deprecation dance for every
>> bug fix. Is this a bug fix, so numpy will simply start producing more
>> accurate results on a given problem? I guess there isn't really a
>> right answer here (though claiming that [a, b, a+b] is full-rank is
>> clearly broken, and the matlab algorithm seems reasonable for
>> answering the specific question of whether a matrix is full rank), so
>> we'll have to hope some users speak up...
>
>I don't see a problem changing this as a bugfix.
>statsmodels still has, I think, the original scipy.stats.models version
>for rank, with cond=1.0e-12, which is still a much higher threshold than
>the eps-based ones for any non-huge float array.
>
>Josef
+1 for making the default "matlab": it sounds like it would be the least
confusing. It also seems to me that a bug fix is probably the right procedure.
Last, I like best having only the matlab default (options seem unnecessary).
cheers
JB
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Hello:
I'm hoping someone can straighten me out. I have a 64-bit Fedora 8
quad-core machine and can install blas and lapack from the yum
repository. With these, numpy installs fine and finds blas and
lapack.
I also tried removing the yum blas/lapack libs and installing atlas
via the instructions