yields
array([[ 0., -1.,  0.,  0.],
       [ 1.,  0., -1.,  0.],
       [ 0.,  1.,  0., -1.],
       [ 0.,  0.,  1.,  0.]])
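(The code that produced this output was snipped above; one minimal way to
build such a matrix, as a sketch and not necessarily the original code:)
import numpy as np
m = np.eye(4, k=-1) - np.eye(4, k=1)   # subdiagonal ones minus superdiagonal ones
print(repr(m))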
Hope this helps,
Ian Henriksen
On Wed, Aug 31, 2016 at 3:57 PM Jason Newton wrote:
> Hey Ian - I hope I gave Cython a fair comment, but I have to add the
> disclaimer that your capability to understand/implement those
> solutions/workarounds in that project is greatly enhanced from your knowing
> the innards of
rs or variadic
templates will often require some extra C++ code to work them into
something that Cython can understand. In my experience, those particular
limitations haven't been that hard to work with.
Best,
Ian Henriksen
On Wed, Aug 31, 2016 at 12:20 PM Jason Newton wrote:
> I just wan
Personally, I think this is a great idea. +1 to more informative errors.
Best,
Ian Henriksen
On Mon, Jun 13, 2016 at 2:11 PM Nathaniel Smith wrote:
> It was recently pointed out:
>
> https://github.com/numpy/numpy/issues/7730
>
> that this code silently truncates floats
ing 64 bit integers default in more cases would help here.
Currently arange gives 32 bit integers on 64 bit Windows, but
64 bit integers on 64 bit Linux/OSX. Using size_t (or even
int64_t) as the default size would help with overflows in
the more common use cases. It's a hefty backc
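(A quick check of the platform difference described here, assuming a stock
NumPy build on each OS:)
import numpy as np
print(np.arange(3).dtype)                  # historically int32 on 64-bit Windows, int64 on Linux/OSX
print(np.arange(3, dtype=np.int64).dtype)  # an explicit dtype sidesteps the platform default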
doing
operations with integers, I expect integer output, regardless of floor
division and overflow.
There's a lot to both sides of the argument though. Python's arbitrary
precision integers alleviate overflow concerns very nicely, but forcing
float output for people who actually want integers is not at all ideal
either.
Best,
Ian Henriksen
On Mon, Apr 11, 2016 at 5:24 PM Chris Barker wrote:
> On Fri, Apr 8, 2016 at 4:37 PM, Ian Henriksen <
> insertinterestingnameh...@gmail.com> wrote:
>
>
>> If we introduced the T2 syntax, this would be valid:
>>
>> a @ b.T2
>>
>> It makes the intent
If we introduced the T2 syntax, this would be valid:
a @ b.T2
It makes the intent much clearer. This helps readability even more when
you're
trying to put together something that follows a larger equation while still
broadcasting correctly.
Does this help make the use cases a bit clearer?
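(For concreteness, a sketch of the semantics being proposed -- T2 is not an
existing NumPy attribute, so this helper only illustrates the idea:)
import numpy as np
def T2(a):
    a = np.asarray(a)
    if a.ndim == 1:
        return a[:, np.newaxis]    # promote a 1-D array to a column vector
    return a.swapaxes(-2, -1)      # otherwise transpose the last two axes
a = np.ones((3, 3))
b = np.arange(3.)
print((a @ T2(b)).shape)           # (3, 1) -- an explicit column result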
Best
common reshape
operations.
It seems like it would cover all the needed cases for simplifying
expressions
involving matrix multiplication. It's not totally clear what the semantics
should be
for higher dimensional arrays or 2D arrays that already have a shape
incompatible
with the one
On Thu, Apr 7, 2016 at 1:53 PM wrote:
> On Thu, Apr 7, 2016 at 3:26 PM, Ian Henriksen <
> insertinterestingnameh...@gmail.com> wrote:
>
>> On Thu, Apr 7, 2016 at 12:31 PM wrote:
>>
>>> write unit tests with non square 2d arrays and the exception / test
On Thu, Apr 7, 2016 at 12:31 PM wrote:
> write unit tests with non square 2d arrays and the exception / test error
> shows up fast.
>
> Josef
>
>
Absolutely, but good programming practices don't totally obviate helpful
errors.
On Thu, Apr 7, 2016 at 12:18 PM Chris Barker wrote:
> On Thu, Apr 7, 2016 at 10:00 AM, Ian Henriksen <
> insertinterestingnameh...@gmail.com> wrote:
>
>> Here's another example that I've seen catch people now and again.
>>
>> A = np.random.rand(100,
r becomes difficult to find. This error isn't necessarily unique to
beginners either.
It's a
common typo that catches intermediate users who know about broadcasting
semantics but weren't keeping close enough track of the dimensionality of
the
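(An illustration of the kind of typo being described -- assumed, since the
original example above was truncated:)
import numpy as np
a = np.random.rand(100)       # shape (100,)
b = np.random.rand(100, 1)    # shape (100, 1) -- one stray axis
d = a - b                     # no error: silently broadcasts to (100, 100)
print(d.shape)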
dcasting usually prepends ones to fill in
missing
dimensions and the fact that our current linear algebra semantics often
treat rows
as columns, but making 1-D arrays into rows makes a lot of sense as far as
user
experience goes.
Great ideas everyone!
Best,
-Ian Henriksen
re
the case here too.
In general, you can't expect Linux distros to have a uniform shared object
interface
for LAPACK, so you don't gain much by using the version that ships with
CentOS 5 beyond not having to compile it all yourself. It might be better
to use a
newer LAPACK built from source with the older toolchains already there.
Best,
-Ian Henriksen
On Tue, Dec 22, 2015 at 12:14 AM Ralf Gommers
wrote:
> That would be quite useful I think. 32/64-bit issues are mostly orthogonal
> to py2/py3 ones, so maybe only a 32-bit Python 3.5 build to keep things fast?
>
Done in https://github.com/numpy/numpy/pull/6874.
Hope this he
that the speed difference is so large.
Best,
-Ian
installation, there's a demo
at
https://github.com/ogrisel/python-appveyor-demo that looks promising as
well. I avoided that
initially because it's a lot more complex than just a single appveyor.yml
file.
Best,
-Ian
On Fri, Dec 18, 2015 at 3:27 PM Nathaniel Smith wrote:
> On Dec 18, 2015 2:22 PM, "Ian Henriksen" <
> insertinterestingnameh...@gmail.com> wrote:
> >
> > An appveyor setup is a great idea. An appveyor build matrix with the
> > various supported MSVC
> We need to ensure that the MSVC builds work. But that's not new, that was
> always necessary for a release. Christophe has always tested beta/rc
> releases which is super helpful, but we need to get Appveyor CI to work
> soon.
>
> Ralf
>
>
.)
>
Yes, there are ABI patches in SciPy that put together a uniform ABI for all
currently supported BLAS and LAPACK libraries (
https://github.com/scipy/scipy/tree/master/scipy/_build_utils). With a
little luck, some of these other libraries won't need as much special
casing, but, I don
cipy.
>
Supporting Eigen sounds like a great idea. BLIS would be another
one worth supporting at some point. As far as implementation goes, it may
be helpful to look at https://github.com/numpy/numpy/pull/3642 and
https://github.com/numpy/numpy/pull/4191 for the corresponding set of
change
the current amount of work that is
coming from the 1.10 release, but this sounds like a really great idea.
Computed/fixed dimensions would allow gufuncs for things like:
- polynomial multiplication, division, differentiation, and integration
- convolutions
- views of different types (see the
se and reverse the order of the SVD, but taking the
transposes once the algorithm is finished will ensure they are C ordered.
You could also use np.ascontiguousarray on the output arrays, though that
results in unnecessary copies that change the memory layout.
Best of luck!
-Ian Henriksen
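(A minimal sketch of the np.ascontiguousarray option mentioned above:)
import numpy as np
a = np.random.rand(5, 3)
u, s, vt = np.linalg.svd(a, full_matrices=False)
u, vt = np.ascontiguousarray(u), np.ascontiguousarray(vt)  # copies only if needed
print(u.flags['C_CONTIGUOUS'], vt.flags['C_CONTIGUOUS'])   # True True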
ning when the default behavior is expected.
The first point would be a break in backwards compatibility, so I'm not
sure if it's feasible at this point. The advantage would be that all
arrays returned when using this functionality would be contiguous along the
last axis. The shape
On Mon, Oct 13, 2014 at 8:39 PM, Charles R Harris wrote:
>
>
> On Mon, Oct 13, 2014 at 12:54 PM, Sebastian Berg <
> sebast...@sipsolutions.net> wrote:
>
>> On Mo, 2014-10-13 at 13:35 +0200, Daniele Nicolodi wrote:
>> > Hello,
>> >
>> > I have a C++ application that collects float, int or complex
ision to return inf since that is what IEEE-754 arithmetic
> specifies.
>
> For me the question is if the floor division should also perform a cast
> to an integer type. Since inf cannot be represented in most common
> integer formats
Right. I'm new to NumPy so I figured I'd check if there was some nifty way of
preserving the shape without storing it in the database that I hadn't
discovered yet. No worries, I'll store the shape alongside the array. Thanks
for the reply.
Ian
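(For reference, a minimal sketch of the store-the-shape-alongside approach:)
import numpy as np
a = np.arange(12, dtype=np.float64).reshape(3, 4)
blob = a.tobytes()               # the bytes to store in the database
shape, dtype = a.shape, a.dtype  # store these alongside the blob
restored = np.frombuffer(blob, dtype=dtype).reshape(shape)
assert (restored == a).all()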
Hello list,
I am storing a multidimensional array as binary in a Postgres 9.04 database.
For retrieval of this array from the database I thought frombuffer() was my
solution, however I see that this constructs a one-dimensional array. I read in
the documentation about the buffer parameter in th
ole
thing into memory in that time, or is there some kind of delayed read
going on?
Ian
Cheers,
Ian
Sparse arrays could be helpful if the data isn't
"dense" on your grid.
Ian
Like most people these days, I have multiple 24" monitors. I don't need
"print" of ndarrays to wrap after 72 columns. Is there some way to
change this?
TIA
Ian
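(For reference, the wrap width is controlled by np.set_printoptions:)
import numpy as np
np.set_printoptions(linewidth=200)   # wrap array reprs at 200 columns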
CURRENT:
[NaN NaN NaN NaN NaN 5.1882094 1.19646584]]
DES
On 12/22/10 9:16 AM, Ian Stokes-Rees wrote:
> What is the most efficient way to do the Matlab equivalent of nnz(M)
> (nnz = number-of-non-zeros function)?
Thanks to all the various responses. I should have mentioned that I'm
using scipy.sparse, and lil_matrix objects have a met
What is the most efficient way to do the Matlab equivalent of nnz(M)
(nnz = number-of-non-zeros function)?
I've tried Google, but no luck.
My assumption is that something like
a != 0
will be used, but I'm not sure then how to "count" the number of "
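(Both idioms, for reference -- np.count_nonzero is the direct equivalent in
modern NumPy:)
import numpy as np
a = np.array([[0, 1, 0], [2, 0, 3]])
print(np.count_nonzero(a))   # 3
print((a != 0).sum())        # same count via the boolean-sum idiom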
Yes, that's pretty much the situation. I'm mostly looking for someone
who has satisfactory performance with their Core i7 so I can get some
comparison information and figure out if I need to disable
hyperthreading or compile atlas with different flags or what.
Are the Ubuntu 10.10 atlas packages a
I'm wondering if anyone here has successfully built numpy with ATLAS
and a Core i7 CPU on Ubuntu 10.04. If so, I could really use your
help. I've been trying since August (see my earlier messages to this
list) to get numpy running at full speed on my machine with no luck.
The Ubuntu packages don't
wrappers, which seems to preclude (easily)
adding a "__getattr__(self,attr)" method to the class. If someone can
point me in the right direction, I'll keep looking into this, otherwise
I'm giving up and will just try and use recarray.
Ian
> That's why I said that __getattr__ would perhaps work better.
So do you want me to try out an implementation and supply a patch? If
so, where should I send the patch?
Ian
On 10/28/10 5:29 PM, Robert Kern wrote:
> On Thu, Oct 28, 2010 at 15:17, Ian Stokes-Rees
> wrote:
>> I have an ndarray with named dimensions. I find myself writing some
>> fairly laborious code with lots of square brackets and quotes. It seems
>> like it wouldn
'OK'].b)
which is really a lot clearer. Is something like this already possible
somehow?
Is there some reason not to map __getattr__ to __getitem__?
class ndarray():
    ...
    __getattr__ = __getitem__
Or something somewhat more sophisticated like:
def __getattr__
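(Completing the idea as a sketch -- a subclass rather than a patch to
ndarray itself, with the class name chosen here for illustration:)
import numpy as np

class AttrArray(np.ndarray):
    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        try:
            return self[name]
        except Exception:
            raise AttributeError(name)

a = np.array([(1, 2.0)], dtype=[('x', int), ('y', float)]).view(AttrArray)
print(a.x, a.y)   # field access via attributes, like recarray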
rg> wrote:
> On Wed, Oct 20, 2010 at 09:13:24AM -0400, Ian Goodfellow wrote:
> >--with-netlib-lapack is indeed no longer valid.
> >INSTALL.txt includes a warning that INSTALL.txt is out of date, you
> should
> >refer to doc/atlas_install.pdf instead.
>
>
--with-netlib-lapack is indeed no longer valid.
INSTALL.txt includes a warning that INSTALL.txt is out of date, you should
refer to doc/atlas_install.pdf instead.
The new option is --with-netlib-lapack-tarfile
I successfully built 3.9.25 with this option a while ago, but haven't been
able to get nu
To do a standard installation, run
sudo python setup.py install
from inside the numpy directory
Then your import should work elsewhere.
By the way, "import *" can cause difficulties when you're working with
several different files. For example, if you have a function called 'save'
somewhere that
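(An illustration of the hazard -- assumed context, since the sentence above
was cut off:)
from numpy import *   # brings in numpy.save among hundreds of other names

def save(data):       # this now shadows numpy.save (or vice versa,
    ...               # depending on definition order)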
If the arrays are the same size or can be broadcasted to the same
size, it returns true or false on an elementwise basis.
If the arrays are not the same size and can't be broadcasted to the
same size, it returns False, which was a surprise to me too.
>>> import numpy as N
>>> N.asarray([[0,1
The reasoning behind this is that == returns an array that specifies
whether each element of the two arrays is equal. It's only defined if
the arrays are the same shape (or maybe if they can be broadcasted to
the same shape).
The correct way to check if an array is empty is to inspect its .shape.
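(A quick demonstration:)
import numpy as np
a = np.empty((0, 3))
print(a.size == 0)   # True: the array is empty
print(a.shape)       # (0, 3)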
usec per loop
not doing matrix multiply: 0.0131
On Fri, Oct 8, 2010 at 12:35 PM, Bruce Southey wrote:
> On 10/08/2010 10:58 AM, Ian Goodfellow wrote:
>
> Here's the output on my atlas library:
> file -L /usr/local/atlas/lib/libatlas.so
> /usr/local/atlas/lib/libatlas
import time
import numpy as N
rng = N.random   # assuming rng was numpy.random in the snipped setup
A = rng.randn(1000,1000)
t1 = time.time(); x = N.dot(A,A); t2 = time.time()
print(t2 - t1)
On Fri, Oct 8, 2010 at 11:49 AM, Bruce Southey wrote:
> On 10/08/2010 10:01 AM, Ian Goodfellow wrote:
>
> I'm using 64-bit Ubuntu 10.04. I originally tried building without site
= ['lapack', 'lapack', 'f77blas', 'cblas', 'atlas']
library_dirs = ['/usr/local/atlas/lib']
define_macros = [('ATLAS_INFO', '"\\"None\\""')]
language = f77
include_dirs = ['/usr/inc
Is there a log file I should post or do you just mean to capture everything
that setup.py prints to stdout?
On Fri, Oct 8, 2010 at 10:06 AM, Benjamin Root wrote:
> On Fri, Oct 8, 2010 at 8:47 AM, Ian Goodfellow
> wrote:
>
>> Can anyone explain how to get numpy to recognize atla
[DEFAULT]
library_dirs = /usr/local/atlas/lib
include_dir = /usr/local/atlas/include
Thanks in advance,
Ian
Hi,
A background in linear algebra helps. I just came up with this method
(which, because I thought of it 5 seconds ago, I don't know if it works):
Line p1, p2
Point v
costheta = normalize(p2-p1) dot normalize(v-p1)
dist = length(v-p1)*sin(acos(costheta))
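(The same computation in NumPy, as a sketch -- the cross-product identity
avoids the acos/sin round trip:)
import numpy as np

def point_line_distance(p1, p2, v):
    d = p2 - p1
    return np.linalg.norm(np.cross(d, v - p1)) / np.linalg.norm(d)

p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.])
print(point_line_distance(p1, p2, np.array([0.5, 2., 0.])))   # 2.0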
Hi,
Couldn't you do it with several sum steps? E.g.:
result = array.sum(axis=1).sum(axis=2)
Ian
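(One caveat: after the first sum the remaining axes renumber, so chained
reductions refer to the reduced array's axes. Modern NumPy also accepts a
tuple of axes directly:)
import numpy as np
a = np.arange(24).reshape(2, 3, 4)
print(a.sum(axis=(1, 2)))           # reduce over original axes 1 and 2
print(a.sum(axis=1).sum(axis=1))    # chained form: old axis 2 is now axis 1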
Hi,
You can use np.mgrid to construct a grid of coordinates. From there, you
can make your new array based on these coordinates however you like.
Ian
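(A small example of the np.mgrid approach:)
import numpy as np
y, x = np.mgrid[0:3, 0:4]          # integer coordinate grids, shape (3, 4)
r = np.hypot(x - 1.5, y - 1.0)     # e.g. distance of each cell from a point
print(r)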
Hi,
I've converted all of the code to use record arrays, for a 10-fold speed
boost. Thanks,
Ian
,y_gradient)
angles = np.radians((distances_to_center/((resolution/2.0)-0.5)) * 90.0)
lambert_weights = np.cos(angles)
This seems like it could be improved.
Ian
PS, I meant "Sinusoidal" in the title. I carried that over into my
description, which should have been "cosine".
aving trouble because I don't know how to operate on an
array's values based on the index of the values themselves.
Help?
Thanks,
Ian
How about numpy.set_printoptions(suppress=True)?
See
http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html
.
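(A quick demonstration of the effect:)
import numpy as np
np.set_printoptions(suppress=True)
print(np.array([1e-8, 1.5]))   # [ 0.   1.5] instead of scientific notation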
HTH,
Ian
r specifically, how to
keep them together during a sort).
What're my options?
Thanks again for the extremely valuable help,
Ian
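(The usual idiom for keeping parallel arrays together during a sort is
np.argsort -- compute one permutation and apply it to every array:)
import numpy as np
keys = np.array([3, 1, 2])
vals = np.array(['c', 'a', 'b'])
order = np.argsort(keys)
keys, vals = keys[order], vals[order]   # both reordered consistently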
On Fri, Jul 23, 2010 at 12:13 AM, Jon Wright wrote:
> Ian Mallett wrote:
> >
> > To the second, actually, I need to increment the number of times the
> > index is there. For example, if b=[1,5,6,6,6,9], then a[6-1] would have
> > to be incremented by +3 = +1+1+1. I
the index
is there. For example, if b=[1,5,6,6,6,9], then a[6-1] would have to be
incremented by +3 = +1+1+1. I tried simply a[b-1]+=1, but it seems to only
increment values once, even if there are more than one occurrence of the
index in "b".
into "a".
#For each occurrence of a given index in "b", increment the corresponding
value in "a" by +1.
#How should I do that?
Simpler, and more importantly, better explained problem. Help?
Thanks again,
Ian
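(The repeated-index behavior described here is exactly what np.add.at and
np.bincount handle:)
import numpy as np
a = np.zeros(10, dtype=int)
b = np.array([1, 5, 6, 6, 6, 9])
np.add.at(a, b - 1, 1)   # unlike a[b-1] += 1, repeated indices accumulate
print(a[5])              # 3, from the three occurrences of 6 in b
# Equivalent: a += np.bincount(b - 1, minlength=len(a))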
to execute this process 8720 times when width and height are both
512. I have a feeling my code can be greatly improved in many ways--if not
all over, certainly in lines 5 through 9.
Help?
Thanks,
Ian
ds for my computer. I imagine doing further
operations (like converting into a numpy array) on other Python objects
(like lists) might be similarly slow.
Ian
the z direction.
Each vec3 also moves towards a certain diagonal_distance from all vec3s
along a major diagonal.
I am trying to accomplish this. Any ideas?
Thanks,
Ian
hich would be more complicated. There
are a few strange errors I can't account for (like the object very slowly
drifts, as it rests on the ground).
At the moment, I'm wondering whether a physics engine has already been done
with NumPy before, and if not, whether anyone has any
set_printoptions(precision=None, threshold=None, edgeitems=None,
linewidth=None, suppress=None, nanstr=None, infstr=None)
Ian
>>> import numpy
>>> A = numpy.array([[2,3],[10,12]])
>>> B = numpy.array([[1,4],[9,13]])
>>> C = numpy.array([A,B])
>>> numpy.min(C,0)
array([[ 1,  3],
       [ 9, 12]])
Ian
On Sat, Mar 6, 2010 at 9:46 PM, David Goldsmith wrote:
> Thanks, Ian. I already figured out how to make it not so, but I still want
> to understand the design reasoning behind it being so in the first place
> (thus the use of the question "why (is it so)," not "how
>>> x = numpy.array(3)
>>> x
array(3)
>>> x.shape
()
>>> y = numpy.array([3])
>>> y
array([3])
>>> y.shape
(1,)
Ian
data: #v1,v2,v3,n
normal = object.transformed_normals[sublist][indices[3]]
v1, v2, v3 = [object.transformed_vertices[sublist][indices[i]]
              for i in xrange(3)]
if abs_angle_between_rad(normal, vec_subt(v1, lightpos))
Thanks,
Ian
on to ints on-the-fly, so maybe try
> same_edges.astype(numpy.int8).sum(axis = 0).
>
Actually, it's marginally slower :S
> Hope this gives some improvement. I attach the modified version.
>
> Ah, one thing to mention, have you not accidentally timed also the
> printo
(diff), axis = 0)
#0.026504089
comparison_diff = (diff_1 == diff_2)
#0.023019073
same_edges = comparison_sum * comparison_diff
#0.12848502
doublet_count = same_edges.sum(axis = 0)
Ian
, (where v*n* is a vertex *
index*), the edges [v1,v2], [v2,v3], and [v3,v1] need to be added to a
list. I.e., f_list needs to be converted into a list of edges in this way.
Then, duplicate edge pairs need to be removed, noting that [v1,v2] and
[v2,v1] are still a pair (in my Python code, I simply sorted the edges
before removing duplicates: [123,4] -> [4,123] and [56,968] -> [56,968]).
The final edge list then should be converted back into actual vertices by
indexing it into v_array (I think I understand how to do this now!): [
[1,2], [4,6], ... ] -> [ [[0.3,1.6,4.5],[9.1,4.7,7.7]],
[[0.4,5.5,8.3],[9.6,8.1,0.3]], ... ]
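(A vectorized sketch of the whole pipeline in modern NumPy -- array names
assumed from the description above:)
import numpy as np
f_list = np.array([[0, 1, 2], [0, 2, 3]])             # (n_faces, 3) vertex indices
v_array = np.random.rand(4, 3)                        # (n_verts, 3) positions
edges = f_list[:, [0, 1, 1, 2, 2, 0]].reshape(-1, 2)  # the three edges per face
edges = np.sort(edges, axis=1)        # so [v2,v1] matches [v1,v2]
edges = np.unique(edges, axis=0)      # drop duplicate pairs
edge_verts = v_array[edges]           # (n_edges, 2, 3) actual coordinates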
> It may seem a bit complicated, but I hope this impression is mainly
> because of the many outputs ...
>
> I double-checked everything, *hope* everything is correct.
> So far from me,
> Friedrich
>
Once again, thanks so much,
Ian
ady in the
set of edges (in the same or reversed form), then the edge is removed from
the set.
I may be able to do something with lines 14-17 myself--but I'm not sure.
If my code above can't be simplified using numpy, is there a way to
efficiently convert numpy arrays back to stand
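(On the conversion question: ndarray.tolist() gives back plain nested
Python lists:)
import numpy as np
a = np.array([[1, 2], [3, 4]])
print(a.tolist())   # [[1, 2], [3, 4]]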
> in place.
>
Oh . . . duh :D
Excellent--and a 3D rotation matrix is 3x3--so the list can remain n*3. Now
the question is how to apply a rotation matrix to the array of vec3?
> Chuck
>
Ian
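(Applying one rotation matrix to every row of an n*3 array, as a sketch:)
import numpy as np
vecs = np.random.rand(100, 3)
t = np.radians(30.)
R = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0.,         0.,        1.]])   # rotation about z
rotated = np.dot(vecs, R.T)   # same as (R @ vecs.T).T, applied row by row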
to length 4 (adding a 1 for the last term).
Then, the matrices would be applied. Then, the resulting n*4 array would be
converted back into a n*3 array (simply by dropping the last term).
Ian
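(The homogeneous-coordinate version sketched here, for a 4x4 transform M:)
import numpy as np
pts = np.random.rand(100, 3)
hom = np.hstack([pts, np.ones((len(pts), 1))])   # n*3 -> n*4
M = np.eye(4); M[:3, 3] = [1., 2., 3.]           # e.g. a translation matrix
out = np.dot(hom, M.T)[:, :3]                    # apply, then drop the 1s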
],[7,8,9],...] ->
[[1,2,3,1],[4,5,6,1],[7,8,9,1],...]. Each of these needs to be multiplied
by either a translation matrix, or a rotation and translation matrix.
I don't really know how to approach the problem . . .
Thanks,
Ian
able).
So, take the logarithm (to any base) of all the "conc" values. Then do a
linear regression on those values versus "sizes".
Try (semi-pseudocode):
slope, intercept, r, p, stderr = scipy.stats.linregress(sizes, log(conc))
Ian
r^2 value) and execute a back transform.
Then, to get your desired extrapolation, simply substitute in the size for
the independent variable to get the expected value.
If, however, you're looking for how to implement this in NumPy or SciPy, I
can't
It isn't, because "c" is the modified "a". Hence, "False".
Do you mean:
c = a
instead of:
c == a
...?
HTH,
Ian
On Wed, Aug 5, 2009 at 11:34 AM, Charles R Harris wrote:
> It could be you could slip in a small mod that would do what you want.
I'll help, if you want. I'm good with GPUs, and I'd appreciate the
numerical power it would afford.
> The main problems with using GPUs were that CUDA was only avai
Awww, it's fun to be foolish on Fridays!
talled, you can try compiling with MingW32, by passing "-c mingw32" to
setup.py.
who could give me some suggestions on how to configure f2py?
Thanks in advance.
ian
liu.ian0...@gmail.com
2009-07-30
1.      0.    ]
 [ 0.6669  0.      0.    ]
 [ 1.      0.      0.    ]
 [ 0.6669  0.5     0.    ]
 [ 1.      0.5     0.    ]
 [ 0.6669  1.      0.    ]
 [ 1.      1.      0.    ]]
Thanks,
Ian
hotel + flight that means I cannot attend. I'm sure others are
in the same boat these days.
Ian
The trickier part is figuring out how to securely distribute this (or
enable access to live webcasts), and to be sure all presenters are OK
with their work being recorded and redistributed in this way.
The good news is that EVO is developed by a team at Caltech where the
conference is taking
Hello,
I have some code that makes vertex buffer object terrain. Because the setup
for this (a series of triangle strips) is a bit clunky, I just implemented
the tricky parts in Python.
The code works, but it's slow. How should I go about optimizing it?
Thanks,
Ian
size = [#int_some
These are all great algorithms, thanks for the help!
en(lines),len(dtype)), dtype=dtype)
...
results[idx]= [score, rfac, codefull, code2, subset]
or indexing into the array:
results[:,0]
results[:,1]
results[:].take(0)
results[:][0] # this works, but doesn't return the desired first column
Any suggestions greatly appreciate
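(For a structured array, columns are selected by field name rather than by
integer index -- the field names here are assumed from the snippet:)
import numpy as np
dtype = [('score', float), ('rfac', float), ('code', 'U8')]
results = np.zeros(3, dtype=dtype)
results[0] = (1.0, 0.2, 'abcd')
print(results['score'])    # the whole "column", by field name
print(results[0]['rfac'])  # a single row's field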
This showed up on numpy-discussion. I posted this morning and it hasn't
appeared yet.
Ian
Nadav Horesh wrote:
Thanks. :D
vecs = numpy.random.standard_normal((size,size,3))  # Gaussian samples give uniform directions
magnitudes = numpy.sqrt((vecs*vecs).sum(axis=-1))
vecs = vecs / magnitudes[...,numpy.newaxis]         # project onto the unit sphere
randlen = numpy.random.random((size,size))
randlen = randlen ** (1.0/3.0)                      # cube root makes the radii volume-uniform
randpoints = vecs*randlen[...,numpy.newaxis]
rgb = ((randpoints+1.0)/
.0
surface = pygame.surfarray.make_surface(rgb)
I think the arrays are being concatenated. Hence, visually, it doesn't look
right...
Ian
@Stéfan: I thought of the first method. Let's hear the second approach.
@Gökhan: Yes. Toolbox is my metaphor for being able to do effects in
OpenGL. Point sprites are images mapped onto vertices; VBOs are *v*ertex
*b*uffer *o*bjects, which make stuff faster.
Presently, it's being rendered using point sprites/VBOs. It's supposed to
be for a game I'm working on, but it's a good effect to have in the toolbox
too :D
distribution of *directions* on the sphere. The particles are too
closely centered around the center. What I want is a certain number of
particles arranged evenly throughout the sphere. How do I do that?
Thanks,
Ian
As an off-topic solution, there's always the GPU to do the the particle
updating. With half decent optimization, I've gotten over a million
particles in *real-time*. You could presumably run several of these at the
same time to get as many particles as you want. Downside would be
ease-of-impleme
This works perfectly! Is there likewise a similar call for Numeric?
Sounds like it would work, but unfortunately numpy was one of my dependency
constraints. I should have mentioned that.
soften by column and by row.
Thanks,
Ian