Thanks Eric.
Also relevant: https://github.com/numba/numba/issues/909
Looks like Numba has found a way to avoid this edge case.
On Monday, April 4, 2016, Eric Firing wrote:
> On 2016/04/04 9:23 AM, T J wrote:
>
>> I'm on NumPy 1.10.4 (mkl).
>>
I'm on NumPy 1.10.4 (mkl).
>>> np.uint(3) // 2 # 1.0
>>> 3 // 2 # 1
Is this behavior expected? It's certainly not desired from my perspective.
If this is not a bug, could someone explain the rationale to me?
Thanks.
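A sketch of what seems to be going on, assuming np.uint is the platform's 64-bit unsigned integer (these are the NumPy 1.x-era casting rules):

import numpy as np

# uint64 mixed with a signed Python int has no common integer type
# large enough for both, so 1.x-era NumPy falls back to float64.
print(np.uint(3) // 2)            # 1.0  (float64)
print(np.uint(3) // np.uint(2))   # 1    (stays unsigned)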
It does, but it is not portable. That's why I was hoping NumPy might think
about supporting more rounding algorithms.
On Thu, Oct 2, 2014 at 10:00 PM, John Zwinck wrote:
> On 3 Oct 2014 07:09, "T J" wrote:
> >
> > Any bites on this?
> >
Any bites on this?
On Wed, Sep 24, 2014 at 12:23 PM, T J wrote:
> Is there a ufunc for rounding away from zero? Or do I need to do
>
> x2 = sign(x) * ceil(abs(x))
>
> whenever I want to round away from zero? Maybe the following is better?
>
> x_ceil = ceil(x)
Hi, I'm using NumPy 1.8.2:
In [1]: np.array(0) / np.array(0)
Out[1]: 0
In [2]: np.array(0) / np.array(0.0)
Out[2]: nan
In [3]: np.array(0.0) / np.array(0)
Out[3]: nan
In [4]: np.array(0.0) / np.array(0.0)
Out[4]: nan
In [5]: 0/0
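A sketch of how to make the float case loud instead of silent, assuming the default error state of that era (integer 0/0 floor-divides to 0 with a warning; float 0/0 quietly gives nan):

import numpy as np

# np.errstate upgrades the invalid-operation warning to a hard error
# for the duration of the block.
with np.errstate(divide='raise', invalid='raise'):
    np.array(0.0) / np.array(0.0)   # raises FloatingPointError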
Is there a ufunc for rounding away from zero? Or do I need to do
x2 = sign(x) * ceil(abs(x))
whenever I want to round away from zero? Maybe the following is better?
x_ceil = ceil(x)
x_floor = floor(x)
x2 = where(x >= 0, x_ceil, x_floor)
Python's round function goes away from zero.
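One loop-free spelling, a sketch rather than a dedicated ufunc: np.copysign composes the two steps and also preserves the sign of negative zeros:

import numpy as np

def round_away(x):
    # magnitude from ceil(|x|), sign copied back from x
    return np.copysign(np.ceil(np.abs(x)), x)

round_away(np.array([-1.2, -0.5, 0.0, 0.5, 1.2]))  # [-2., -1., 0., 1., 2.]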
What is the status of:
https://github.com/numpy/numpy/blob/master/doc/neps/missing-data.rst
and of missing data in Numpy, more generally?
Is np.ma.array still the "state-of-the-art" way to handle missing data? Or
has something better and more comprehensive been put together?
On Tue, Mar 12, 2013 at 9:59 AM, Bradley M. Froehle
wrote:
> T J:
>
> You may want to look into `numpy.frompyfunc` (
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.frompyfunc.html
> ).
>
>
Yeah, that's better, but it doesn't respect the output type of the function; frompyfunc always returns object arrays.
Prior to 1.7, I had working compatibility code such as the following:
if has_good_functions:
    # http://projects.scipy.org/numpy/ticket/1096
    from numpy import logaddexp, logaddexp2
else:
    logaddexp = vectorize(_logaddexp, otypes=[numpy.float64])
    logaddexp2 = vectorize(_logaddexp2, otypes=[numpy.float64])
On Fri, Oct 12, 2012 at 1:04 PM, Sturla Molden wrote:
> I'm still rather sure GIS functionality belongs in scipy.spatial instead
> of numpy.
>
>
From the link:
"""
FocalMax
Finds the highest value for each cell location on an input grid within
a specified neighborhood and sends it to the corresponding cell location
on the output grid.
"""
On Sat, Jun 30, 2012 at 1:50 PM, srean wrote:
>
> Anecdotal data-point:
> I have been happy with SO in general. It works for certain types of
> queries very well. OTOH if the answer to the question is known only to
> a few and he/she does not happen to be online at the time the question
> was posted.
On Sat, Jun 30, 2012 at 1:26 PM, wrote:
> just some statistics
>
> http://stackoverflow.com/questions/tagged/numpy
> 769 followers, 2,850 questions tagged
>
> a guess: average response time for regular usage question far less than an
> hour
>
> http://stackoverflow.com/questions/tagged/scipy
> 4
On Thu, Jun 28, 2012 at 3:23 PM, Fernando Perez wrote:
> On Thu, Jun 28, 2012 at 3:06 PM, srean wrote:
> > What I like about having two lists is that on one hand it does not
> > prevent me or you from participating in both, on the other hand it
> > allows those who don't want to delve too deeply
On Wed, May 23, 2012 at 4:16 PM, Kathleen M Tacina <
kathleen.m.tac...@nasa.gov> wrote:
> On Wed, 2012-05-23 at 17:31 -0500, Nathaniel Smith wrote:
>
> On Wed, May 23, 2012 at 10:53 PM, Travis Oliphant wrote:
> > To be clear, I'm not opposed to the change, and it looks like we
> > should go f
On Fri, May 11, 2012 at 1:12 PM, Mark Wiebe wrote:
> On Fri, May 11, 2012 at 2:18 PM, Pauli Virtanen wrote:
>
>> 11.05.2012 17:54, Frédéric Bastien kirjoitti:
>> > In Theano we use a view, but that is not relevant as it is the
>> > compiler that tells what is inplace. So this is invisible to the
On Wed, Feb 15, 2012 at 12:45 PM, Alan G Isaac wrote:
> for the core developers. The right way to produce a
> governance structure is to make concrete proposals and
> show how these proposals are in the interest of the
> *developers* (as well as of the users).
>
>
At this point, it seems to me
On Sat, Nov 5, 2011 at 12:55 AM, Nathaniel Smith wrote:
> On Fri, Nov 4, 2011 at 8:33 PM, T J wrote:
> > On Fri, Nov 4, 2011 at 8:03 PM, Nathaniel Smith wrote:
> >> Again, I really don't think you're going to be able to sell an API where
> >> [2] + [IGNO
On Fri, Nov 4, 2011 at 8:03 PM, Nathaniel Smith wrote:
> On Fri, Nov 4, 2011 at 7:43 PM, T J wrote:
> > On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen wrote:
> >> An acid test for proposed rules: given two arrays `a` and `b`,
> >>
> >> a = [1, 2, I
On Fri, Nov 4, 2011 at 6:31 PM, Pauli Virtanen wrote:
> 05.11.2011 00:14, T J kirjoitti:
> [clip]
> > a = 1
> > a += 2
> > a += IGNORE
> > b = 1 + 2 + IGNORE
> >
> > I think having a == b is essential. If they can be different, that
> in a wider sense, as an example from "T J" shows:
>
> a = 1
> a += IGNORE(3)
> # -> a := a + IGNORE(3)
> # -> a := IGNORE(4)
> # -> a == IGNORE(1)
>
> which is different from
>
> a = 1 + IGNORE(3)
> # -> a == IGNORE(4)
>
> Damn, it seeme
On Fri, Nov 4, 2011 at 3:38 PM, Nathaniel Smith wrote:
> On Fri, Nov 4, 2011 at 3:08 PM, T J wrote:
> > On Fri, Nov 4, 2011 at 2:29 PM, Nathaniel Smith wrote:
> >> Continuing my theme of looking for consensus first... there are
> >> obviously a ton of ugly corners in
On Fri, Nov 4, 2011 at 2:29 PM, Nathaniel Smith wrote:
> On Fri, Nov 4, 2011 at 1:22 PM, T J wrote:
> > I agree that it would be ideal if the default were to skip IGNORED
> values,
> > but that behavior seems inconsistent with its propagation properties
> (such
> >
On Fri, Nov 4, 2011 at 2:41 PM, Pauli Virtanen wrote:
> 04.11.2011 20:49, T J kirjoitti:
> [clip]
> > To push this forward a bit, can I propose that IGNORE behave as: PnC
>
> The *n* classes can be a bit confusing in Python:
>
> ### PnC
>
> >>> x = np.
On Fri, Nov 4, 2011 at 1:03 PM, Gary Strangman
wrote:
>
> To push this forward a bit, can I propose that IGNORE behave as: PnC
>>
>> >>> x = np.array([1, 2, 3])
>> >>> y = np.array([10, 20, 30])
>> >>> ignore(x[1])
>> >>> x
>> [1, IGNORED(2), 3]
>> >>> x + 2
>> [3, IGNORED(4), 5]
>> >>> x + y
>>
On Fri, Nov 4, 2011 at 11:59 AM, Pauli Virtanen wrote:
>
> I have a feeling that if you don't start by mathematically defining the
> scalar operations first, and only after that generalize them to arrays,
> some conceptual problems may follow.
>
Yes. I was going to mention this point as well.
>
On Mon, Oct 17, 2011 at 12:45 PM, eat wrote:
>
> Just wondering what are the main benefits, of your approach, comparing to
> simple:
As I hinted, my goal was not to construct a "practical" example, but
rather, to demonstrate how to use the neighborhood iterator in Cython.
Roll and mod are quite
I recently put together a Cython example which uses the neighborhood
iterator. It was trickier than I thought it would be, so I thought to
share it with the community. The function takes a 1-dimensional array
and returns a 2-dimensional array of neighborhoods in the original
array. This is somewha
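For comparison, a pure-NumPy sketch of the same operation (not the Cython iterator itself), assuming zero-fill at the boundaries:

import numpy as np

def neighborhoods(x, before, after):
    # zero-pad both ends, then gather one window per original element
    padded = np.concatenate([np.zeros(before, x.dtype), x,
                             np.zeros(after, x.dtype)])
    width = before + after + 1
    return np.array([padded[i:i + width] for i in range(len(x))])

neighborhoods(np.array([1, 2, 3]), 1, 1)  # [[0,1,2],[1,2,3],[2,3,0]]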
While reading the documentation for the neighborhood iterator, it
seems that it can only handle rectangular neighborhoods. Have I
understood this correctly? If it is possible to do non-rectangular
regions, could someone post an example/sketch of how to do this?
On Mon, Apr 25, 2011 at 9:57 AM, Gael Varoquaux
wrote:
>
> We thought that we could simply have a PRNG per object, as in:
>
> def __init__(self, prng=None):
>     if prng is None:
>         prng = np.random.RandomState()
>     self.prng = prng
>
> I don't like this option, because it m
On Wed, Jan 26, 2011 at 5:02 PM, Joshua Holbrook
wrote:
>> Ah, sorry for misunderstanding. That would actually be very difficult,
>> as the iterator required a fair bit of fixes and adjustments to the core.
>> The new_iterator branch should be 1.5 ABI compatible, if that helps.
>
> I see. Perhaps
On Fri, May 21, 2010 at 8:51 AM, Pauli Virtanen wrote:
> Fri, 21 May 2010 08:09:55 -0700, T J wrote:
>> I tried upgrading today and had trouble building numpy (after rm -rf
>> build). My full build log is here:
>>
>> http://www.filedump.net/dumped/build1274454454.txt
Hi,
I tried upgrading today and had trouble building numpy (after rm -rf
build). My full build log is here:
http://www.filedump.net/dumped/build1274454454.txt
If someone can point me in the right direction, I'd appreciate it very
much. Two excerpts from the log file:
Running from numpy source directory
On Mon, May 10, 2010 at 8:37 PM, wrote:
>
> I went googling and found a new interpretation
>
> numpy.random.pareto is actually the Lomax distribution also known as Pareto 2,
> Pareto (II) or Pareto Second Kind distribution
>
Great!
>
> So, from this it looks like numpy.random does not have a Pareto distribution.
On Sun, May 9, 2010 at 4:49 AM, wrote:
>
> I think this is the same point, I was trying to make last year.
>
> Instead of renormalizing, my conclusion was the following,
> (copied from the mailinglist August last year)
>
> """
> my conclusion:
> -
> What numpy.random.pareto ac
The docstring for np.pareto says:
This is a simplified version of the Generalized Pareto distribution
(available in SciPy), with the scale set to one and the location set to
zero. Most authors default the location to one.
and also:
The probability density for the Pareto distribut
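A sketch of the practical upshot, assuming the Lomax reading above is right: adding 1 to numpy.random.pareto draws recovers the classical Pareto with scale and location 1:

import numpy as np

a = 3.0                                          # shape parameter
lomax = np.random.pareto(a, size=1000)           # support [0, inf)
classical = np.random.pareto(a, size=1000) + 1   # support [1, inf)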
On Thu, May 6, 2010 at 10:36 AM, wrote:
>
> there is a thread last august on unique rows which might be useful,
> and a thread in Dec 2008 for sorting rows
>
> something like
>
> np.unique1d(c.view([('',c.dtype)]*c.shape[1])).view(c.dtype).reshape(-1,c.shape[1])
>
> maybe it's np.unique with nump
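The quoted one-liner in runnable form; as a hedged aside, later NumPy (>= 1.13) grew a direct spelling, and np.unique1d was the old name for np.unique:

import numpy as np

c = np.array([[1, 2], [3, 4], [1, 2]])
# view each row as a single structured element so np.unique compares whole rows
rows = np.unique(c.view([('', c.dtype)] * c.shape[1]))
rows = rows.view(c.dtype).reshape(-1, c.shape[1])
rows2 = np.unique(c, axis=0)   # modern equivalent, NumPy >= 1.13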
On Thu, May 6, 2010 at 10:34 AM, Keith Goodman wrote:
> On Thu, May 6, 2010 at 10:25 AM, T J wrote:
>> Hi,
>>
>> Is there a way to sort the columns in an array? I need to sort it so
>> that I can easily go through and keep only the unique columns.
>> ndarray.so
Hi,
Is there a way to sort the columns in an array? I need to sort it so
that I can easily go through and keep only the unique columns.
ndarray.sort(axis=1) doesn't do what I want as it destroys the
relative ordering between the various columns. For example, I would
like:
[[2,1,3],
[3,5,1],
[0
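A sketch of one way to do this with np.lexsort, assuming columns should be ordered with the first row as the most significant key:

import numpy as np

a = np.array([[2, 1, 3],
              [3, 5, 1],
              [0, 2, 4]])
# lexsort treats its *last* key as primary, so feed the rows reversed;
# the result is a permutation of the column indices.
order = np.lexsort(a[::-1])
sorted_cols = a[:, order]
# unique columns: keep a column whenever it differs from its left neighbor
keep = np.ones(a.shape[1], dtype=bool)
keep[1:] = (np.diff(sorted_cols, axis=1) != 0).any(axis=0)
unique_cols = sorted_cols[:, keep]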
On Mon, Apr 26, 2010 at 10:03 AM, Charles R Harris
wrote:
>
>
> On Mon, Apr 26, 2010 at 10:55 AM, Charles R Harris
> wrote:
>>
>> Hi All,
>>
>> We need to make a decision for ticket #1123 regarding what nansum should
>> return when all values are nan. At some earlier point it was zero, but
>> cur
On Mon, Apr 5, 2010 at 11:28 AM, Robert Kern wrote:
> On Mon, Apr 5, 2010 at 13:26, Erik Tollerud wrote:
>> Hmm, unfortunate. So the best approach then is probably just to tell
>> people to install numpy first, then my package?
>
> Yup.
>
And really, this isn't that unreasonable. Not only does
On Wed, Mar 31, 2010 at 7:06 PM, Charles R Harris
wrote:
>
> That is a 32 bit kernel, right?
>
Correct.
Regarding the config.h, which config.h? I have a numpyconfig.h.
Which compilation options should I obtain and how? When I run
setup.py, I see:
C compiler: gcc -pthread -fno-strict-aliasing
On Wed, Mar 31, 2010 at 3:38 PM, David Warde-Farley wrote:
> Unfortunately there's no good way of getting around order-of-
> operations-related rounding error using the reduce() machinery, that I
> know of.
>
That seems reasonable, but receiving a nan, in this case, does not.
Are my expectations
On Wed, Mar 31, 2010 at 3:36 PM, Charles R Harris
wrote:
>> So this is "expected" behavior?
>>
>> In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
>> Out[1]: -1.5849625007211561
>>
>> In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
>> Out[2]: nan
>>
> I don't see tha
On Wed, Mar 31, 2010 at 1:21 PM, Charles R Harris
wrote:
>
> Looks like roundoff error.
>
So this is "expected" behavior?
In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
Out[1]: -1.5849625007211561
In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
Out[2]: nan
On Wed, Mar 31, 2010 at 10:30 AM, T J wrote:
> Hi,
>
> I'm getting some strange behavior with logaddexp2.reduce:
>
> from itertools import permutations
> import numpy as np
> x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
> for p in perm
Hi,
I'm getting some strange behavior with logaddexp2.reduce:
from itertools import permutations
import numpy as np
x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0,1,2]):
    print p, np.logaddexp2.reduce(x[list(p)])
Essentially, the result
When passing in a list of longs and asking that the dtype be a float
(yes, losing precision), the error message is uninformative whenever
the long is larger than the largest float.
>>> x = 18162664233348664066431651147991808763481175659998486127848191363485244685895222694105917846256694202714
Hi,
Suppose I have an array of shape: (n, k, k). In this case, I have n
k-by-k matrices. My goal is to compute the product of a (potentially
large) user-specified selection (with replacement) of these matrices.
For example,
x = [0,1,2,1,3,3,2,1,3,2,1,5,3,2,3,5,2,5,3,2,1,3,5,6]
says that I
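A minimal sketch, assuming `mats` holds the (n, k, k) stack and `x` the selection: fancy indexing pulls out the chosen matrices (with replacement) and reduce chains the dot products in order:

import numpy as np
from functools import reduce

mats = np.random.rand(7, 4, 4)   # n = 7 matrices, each k x k
x = [0, 1, 2, 1, 3, 3, 2, 1, 3, 2, 1, 5, 3, 2, 3, 5, 2, 5, 3, 2, 1, 3, 5, 6]
result = reduce(np.dot, mats[x])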
On Mon, Sep 7, 2009 at 3:43 PM, T J wrote:
> Or perhaps I am just being dense.
>
Yes. I just tried to reinvent standard matrix multiplication.
On Mon, Sep 7, 2009 at 3:27 PM, T J wrote:
> On Mon, Sep 7, 2009 at 7:09 AM, Hans-Andreas Engel wrote:
>> If you wish to avoid the extra memory allocation implied by `x*y'
>> and get a ~4x speed-up, you can use a generalized ufunc
>> (numpy >= 1.3, stolen f
On Mon, Sep 7, 2009 at 7:09 AM, Hans-Andreas Engel wrote:
> If you wish to avoid the extra memory allocation implied by `x*y'
> and get a ~4x speed-up, you can use a generalized ufunc
> (numpy >= 1.3, stolen from the testcases):
>
> z = numpy.core.umath_tests.inner1d(x, y)
>
This is exactly what
Is there a better way to achieve the following, perhaps without the
python for loop?
>>> x.shape
(1,3)
>>> y.shape
(1,3)
>>> z = empty(len(x))
>>> for i in range(1):
...     z[i] = dot(x[i], y[i])
...
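Two loop-free sketches of the same row-wise dot products:

import numpy as np

x = np.random.rand(5, 3)
y = np.random.rand(5, 3)
z1 = (x * y).sum(axis=1)          # elementwise product, then row sums
z2 = np.einsum('ij,ij->i', x, y)  # the same thing spelled as an einsum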
On Sat, Aug 8, 2009 at 10:09 PM, David Warde-Farley wrote:
> On 9-Aug-09, at 12:36 AM, T J wrote:
>
>> >>> z = array([1,2,3,4])
>> >>> z[[1]]
>> array([2])
>> >>> z[(1,)]
>> 2
>>
> In the special case of scalar indices they
>>> z = array([1,2,3,4])
>>> z[[1]]
array([2])
>>> z[(1,)]
2
I'm just curious: What is the motivation for this differing behavior?
Is it a necessary consequence of, for example, the following:
>>> z[z<3]
array([1,2])
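The rule at work, sketched: a tuple index is unpacked into one index per axis, so a 1-tuple on a 1-D array is plain scalar indexing, while a list triggers fancy indexing and always yields an array:

import numpy as np

z = np.array([1, 2, 3, 4])
z[(1,)]   # identical to z[1]: the scalar 2
z[[1]]    # fancy indexing: array([2])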
On Sat, Aug 8, 2009 at 8:54 PM, Neil Martinsen-Burrell wrote:
>
> The ellipsis is a built-in python constant called Ellipsis. The colon
> is a slice object, again a python built-in, called with None as an
> argument. So, z[...,2,:] == z[Ellipsis,2,slice(None)].
>
Very helpful! Thank you. I did
On Fri, Aug 7, 2009 at 11:54 AM, T J wrote:
> The reduce function of ufunc of a vectorized function doesn't seem to
> respect the dtype.
>
>>>> def a(x,y): return x+y
>>>> b = vectorize(a)
>>>> c = array([1,2])
>>>> b(c, c) # use on
I have an array, and I need to index it like so:
z[...,x,:]
How can I write code which will index z, as above, when x is not known
ahead of time. For that matter, the particular dimension I am querying
is not known either. In case this is still confusing, I am looking
for the NumPy way to do
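A sketch of the usual answer: build the index as a tuple, with slice(None) wherever the literal code would say ':' (the helper name here is made up for illustration):

import numpy as np

def take_at(z, axis, x):
    # z[..., x, :] generalized: put x at `axis`, full slices elsewhere
    idx = [slice(None)] * z.ndim
    idx[axis] = x
    return z[tuple(idx)]

take_at(np.zeros((4, 5, 6)), 1, 2).shape   # (4, 6), same as z[:, 2, :]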
Oh. b.shape = (2,). So I suppose the second to last dimension is, in
fact, the last dimension...and 2 == 2.
nvm
On Fri, Aug 7, 2009 at 2:19 PM, T J wrote:
> Hi, the documentation for dot says that a value error is raised if:
>
> If the last dimension of a is not the same si
Hi, the documentation for dot says that a value error is raised if:
If the last dimension of a is not the same size as the
second-to-last dimension of b.
(http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.htm)
This doesn't appear to be the case:
>>> a = array([[1,2],[3,4]])
The reduce function of ufunc of a vectorized function doesn't seem to
respect the dtype.
>>> def a(x,y): return x+y
>>> b = vectorize(a)
>>> c = array([1,2])
>>> b(c, c) # use once to populate b.ufunc
>>> d = b.ufunc.reduce(c)
>>> c.dtype, type(d)
dtype('int32'),
>>> c = array([[1,2,3],[4,5,6]]
I was wondering why vectorize doesn't make the ufunc available at the
topmost level
>>> def a(x,y): return x + y
>>> b = vectorize(a)
>>> b.reduce
Instead, the ufunc is stored at b.ufunc.
Also, b.ufunc.reduce() doesn't seem to exist until I *use* the
vectorized function at least once. Can
Hi,
Is there a good way to perform dot on an arbitrary list of arrays
which avoids using a loop? Here is what I'd like to avoid:
# m1, m2, m3 are arrays
>>> out = np.identity(m1.shape[0])
>>> prod = [m1, m2, m3, m1, m2, m3, m3, m2]
>>> for m in prod:
... out = np.dot(out, m)
...
I was hoping for somet
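Two hedged possibilities: functools.reduce removes the explicit loop, and later NumPy (>= 1.11) added np.linalg.multi_dot, which also picks an efficient multiplication order:

import numpy as np
from functools import reduce

m1, m2, m3 = (np.random.rand(3, 3) for _ in range(3))
prod = [m1, m2, m3, m1, m2, m3, m3, m2]
out1 = reduce(np.dot, prod)        # chains the dots left to right
out2 = np.linalg.multi_dot(prod)   # NumPy >= 1.11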
On Tue, Jan 20, 2009 at 6:57 PM, Neal Becker wrote:
> It seems the big chunks of time are used in data conversion between numpy
> and my own vectors classes. Mine are wrappers around boost::ublas. The
> conversion must be falling back on a very inefficient method since there is no
> special code
>>> import numpy as np
>>> x = np.ones((3,0))
>>> x
array([], shape=(3, 0), dtype=float64)
To preempt, I'm not really concerned with the answer to: Why would
anyone want to do this?
I just want to know what is happening. Especially, with
>>> x[0,:] = 5
(which works). It seems that nothing is r
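A sketch of why the assignment is legal even though nothing sticks: the selection is a view with zero elements, and broadcasting 5 into it writes nothing:

import numpy as np

x = np.ones((3, 0))
x[0, :].shape   # (0,): an empty view
x[0, :] = 5     # broadcasts 5 into zero slots
x.size          # still 0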
On Mon, Nov 10, 2008 at 4:05 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> I added log2 and exp2. I still need to do the complex versions. I think
> logaddexp2 should go in also to complement these.
Same here, especially since logaddexp is present. Or was the idea
that both logexpadd and lo
On Thu, Nov 6, 2008 at 3:01 PM, T J <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
>> I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I
>> don't want to clutter up numpy with a
On Fri, Nov 7, 2008 at 2:16 AM, David Cournapeau
<[EMAIL PROTECTED]> wrote:
>
> And you have no site.cfg at all ?
>
Wow. I was too focused on the current directory and didn't realize I
had an old site.cfg in ~/.
Two points:
1) Others (myself included) might catch such silliness sooner if the
loc
On Fri, Nov 7, 2008 at 1:48 AM, David Cournapeau
<[EMAIL PROTECTED]> wrote:
>
> It works for me on Intrepid (64 bits). Did you install
> libatlas3gf-base-dev ? (the names changed in intrepid).
>
I fear I am overlooking something obvious.
$ sudo aptitude search libatlas
p libatlas-3dnow-dev
On Fri, Nov 7, 2008 at 1:58 AM, T J <[EMAIL PROTECTED]> wrote:
>
> That the fortran wrappers were compiled using g77 is also apparent via
> what is printed out during setup when ATLAS is detected:
>
> gcc -pthread _configtest.o -L/usr/lib/atlas -llapack -lblas -o _configtest
On Fri, Nov 7, 2008 at 1:26 AM, David Cournapeau
<[EMAIL PROTECTED]> wrote:
> David Cournapeau wrote:
>>
>> Ok, I took a brief look at this: I forgot that Ubuntu and Debian added
>> an additional library suffix to libraries depending on gfortran ABI. I
>> added support for this in numpy.distutils -
On Thu, Nov 6, 2008 at 2:36 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
> I could add exp2, log2, and logaddexp2 pretty easily. Almost too easily, I
> don't want to clutter up numpy with a lot of functions. However, if there is
> a community for these functions I will put them in.
>
I worry ab
On Thu, Nov 6, 2008 at 2:17 PM, T J <[EMAIL PROTECTED]> wrote:
>
> The interest is in information theory, where quantities are
> (standardly) represented in bits.
I think this is also true in the machine learning community.
On Thu, Nov 6, 2008 at 1:48 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
> What is your particular interest in these other bases and why would
> they be better than working in base e and converting at the end?
The interest is in information theory, where quantities are
(standardly) represented
On Wed, Nov 5, 2008 at 2:09 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
> I'm inclined to go with logaddexp and add logsumexp as an alias for
> logaddexp.reduce. But I'll wait until tomorrow to see if there are more
> comments.
When working in other bases, it seems like it would be good to avo
On Wed, Nov 5, 2008 at 12:00 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
> Hmm I wonder if the base function should be renamed logaddexp, then
> logsumexp would apply to the reduce method. Thoughts?
>
As David mentioned, logsumexp is probably the traditional name, but as
the earlier link s
On Tue, Nov 4, 2008 at 9:37 PM, Anne Archibald
<[EMAIL PROTECTED]> wrote:
> 2008/11/5 Charles R Harris <[EMAIL PROTECTED]>:
>> Hi All,
>>
>> I'm thinking of adding some new ufuncs. Some possibilities are
>>
>> expadd(a,b) = exp(a) + exp(b) -- For numbers stored as logs:
>
> Surely this should be lo
On Mon, Nov 3, 2008 at 10:46 AM, T J <[EMAIL PROTECTED]> wrote:
>
> Since these are all in the standard locations, I am building without a
> site.cfg. Here is the beginning info:
>
Apparently, this is not enough. Only if I also set the ATLAS
environment variable am I able to g
Numpy doesn't seem to be finding my atlas install. Have I done
something wrong or misunderstood?
$ cd /usr/lib
$ ls libatlas*
libatlas.a libatlas.so libatlas.so.3gf libatlas.so.3gf.0
$ ls libf77*
libf77blas.a libf77blas.so libf77blas.so.3gf libf77blas.so.3gf.0
$ ls libcblas*
libcblas.a lib
Sorry, wrong list.
On Sun, Nov 2, 2008 at 11:34 AM, T J <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm having trouble installing PyUblas 0.93.1 (same problems from the
> current git repository). I'm in ubuntu 8.04 with standard boost
> packages (1.34.1, I believe
On Mon, Oct 20, 2008 at 2:20 AM, A. G. wrote:
> one well attached to 2 or more units). Is there any simple way in
> numpy (scipy?) in which I can get the number of possible combinations
> of wells attached to the different 3 units, without repetitions? For
> example, I could have all 60 wells attac
On Tue, Oct 14, 2008 at 1:02 AM, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> b) I don't want to use Python / numpy API code in the C functions I'm
> wrapping - so I limit myself to "input" arrays! Since array memory
> does not distinguish between input or output (assuming there is no
> copying nee
Hi,
I'm new to using SWIG and my reading of numpy_swig.pdf tells me that
the following typemap does not exist:
(int* ARGOUT_ARRAY2, int DIM1, int DIM2)
What is the recommended way to output a 2D array? It seems like I should use:
(int* ARGOUT_ARRAY1, int DIM1)
and then provide a python function that reshapes the result.
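A sketch of that wrapper, with all names hypothetical: let SWIG fill a flat ARGOUT_ARRAY1 buffer and reshape it to 2-D on the Python side:

import _mymodule  # hypothetical SWIG-generated module

def compute_2d(nrows, ncols):
    # C side wrapped with (int* ARGOUT_ARRAY1, int DIM1)
    flat = _mymodule.compute(nrows * ncols)
    return flat.reshape(nrows, ncols)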
Hi,
I'm getting a couple of test failures with Python 2.6, Numpy 1.2.0, Nose 0.10.4:
nose version 0.10.4
..
On 5/8/08, Anne Archibald <[EMAIL PROTECTED]> wrote:
> Is "logarray" really the way to handle it, though? it seems like you
> could probably get away with providing a logsum ufunc that did the
> right thing. I mean, what operations does one want to do on logarrays?
>
> add -> logsum
> subtract -> ?
On Thu, May 8, 2008 at 12:26 AM, T J <[EMAIL PROTECTED]> wrote:
>
> >>> x = array([-2,-2,-3], base=2)
> >>> y = array([-1,-2,-inf], base=2)
> >>> z = x + y
> >>> z
> array([-0.415037499279, -1.0, -3])
> >>> z = x *
Hi,
For precision reasons, I almost always need to work with arrays whose
elements are log values. My thought was that it would be really neat
to have a 'logarray' class implemented in C or as a subclass of the
standard array class. Here is a sample of how I'd like to work with
these objects:
>
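A minimal sketch of the arithmetic such a logarray would need, assuming base-2 logs; the pairwise '+' exists in later NumPy as np.logaddexp2:

import numpy as np

x = np.array([-2.0, -2.0, -3.0])     # log2 of the underlying values
y = np.array([-1.0, -2.0, -np.inf])  # log2(0) is -inf
z_add = np.logaddexp2(x, y)          # log-domain '+': [-0.415..., -1.0, -3.0]
z_mul = x + y                        # log-domain '*': the logs simply add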