Hi,
A background in linear algebra helps. I just came up with this method
(which, because I thought of it 5 seconds ago, I don't know if it works):
Line p1, p2
Point v
costheta = normalize(p2-p1) dot normalize(v-p1)
dist = length(v-p1)*sin(acos(costheta))
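In NumPy terms, the same idea would be roughly (an untested sketch, assuming p1, p2, and v are length-3 arrays):
import numpy as np
d = (p2 - p1) / np.linalg.norm(p2 - p1)   #unit direction along the line
w = v - p1
costheta = np.dot(d, w / np.linalg.norm(w))
dist = np.linalg.norm(w) * np.sin(np.arccos(costheta))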
Ian
___
Hi,
Couldn't you do it with several sum steps? E.g.:
result = array.sum(axis=1).sum(axis=2)
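(One thing to watch: after the first sum, the remaining axes are renumbered, so collapsing what were originally axes 1 and 2 would be chained as below; newer NumPy versions also accept a tuple of axes.)
result = array.sum(axis=1).sum(axis=1)   #collapses original axes 1 and 2
#or, in newer NumPy: result = array.sum(axis=(1, 2))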
Ian
___
Hi,
You can use np.mgrid to construct a grid of coordinates. From there, you
can make your new array based on these coordinates however you like.
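For example, a minimal sketch (n here stands in for your grid size, and the sine-of-distance fill is just an illustration):
import numpy as np
ys, xs = np.mgrid[0:n, 0:n]                            #ys[i,j] == i, xs[i,j] == j
r = np.hypot(xs - (n - 1) / 2.0, ys - (n - 1) / 2.0)   #distance of each cell from the center
values = np.sin(r)                                     #or any other function of the coordinates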
Ian
___
Hi,
I've converted all of the code to use record arrays, for a 10-fold speed
boost. Thanks,
Ian
___
Hi,
After much deliberation, I found a passable solution:
#distance of each cell center from the middle, along one axis
distances = np.abs(np.arange(0,resolution,1)+0.5-(resolution/2.0))
x_gradient = np.tile(distances,(resolution,1))
y_gradient = np.copy(x_gradient)
y_gradient = np.swapaxes(y_gradient,0,1)
distances_to_center = np.hypot(x_gradient,y_gradient)
Hi,
So I have a square 2D array, and I want to fill the array with sine values.
The values need to be generated by their coordinates within the array.
The center of the array should be treated as the angle 90 degrees. Each of
the four edges should be 0 degrees. The corners, therefore, ought to
How about numpy.set_printoptions(suppress=True)?
See
http://docs.scipy.org/doc/numpy/reference/generated/numpy.set_printoptions.html
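For instance (output spacing may differ slightly between NumPy versions):
>>> numpy.array([1e-8, 1.5])
array([  1.00000000e-08,   1.50000000e+00])
>>> numpy.set_printoptions(suppress=True)
>>> numpy.array([1e-8, 1.5])
array([ 0.00000001,  1.5       ])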
HTH,
Ian
___
Hi,
So working on the radiosity renderer:
http://a.imageshack.us/img186/2479/image2f.png.
The code now runs fast enough to generate the data required to draw that.
Now, I need to optimize the radiosity calculation, so that it will converge
in a reasonable amount of time. Right now, the individua
On Fri, Jul 23, 2010 at 12:13 AM, Jon Wright wrote:
> Ian Mallett wrote:
> >
> > To the second, actually, I need to increment the number of times the
> > index is there. For example, if b=[1,5,6,6,6,9], then a[6-1] would have
> > to be incremented by +3 = +1+1+1. I
On Thu, Jul 22, 2010 at 10:05 PM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
> Is that what you want, or do you just want to know how many unique indices
> there are? As to encoding the RGB, unless there is a existing program your
> best bet is probably to use a dot product, i.e., if pix
Hi again,
I've condensed the problem down a lot, because I both presented it in an
overcomplicated way, and did not explain it particularly well.
Condensed problem:
a = np.zeros(num_patches)
b = np.array(...) #created, and is size 512^2 = 262,144
#Each value in "b" is an index into "a".
#For ea
Hi,
So, I'm working on a radiosity renderer, and it's basically finished. I'm
now trying to optimize it. Currently, by far the most computationally
expensive operation is visibility testing, where pixels are counted by the
type of patch that was drawn on them. Here's my current code, which I'm
On Thu, Jul 22, 2010 at 2:09 PM, marco cammarata wrote:
> To convert the list into an array takes about 5 sec ...
>
Not too familiar with typical speeds, but at a guess, perhaps because it
must convert 61.4 million (640*480*200) values? Just to *count* that high
with xrange takes 1.6 seconds for
Hi,
So I have a 4D grid (x,y,z,vec3), where the vec3 represents a position.
What I want to do is to move vec3 elements of the grid based on surrounding
vec3 elements so that the grid's values overall are more orthogonal.
For example, consider the following 3 x 3 x 3 x vec3 grid:
-1.0,-1.0,-
Hi,
I'm working on various projects, and I think it would be great to have a
built-in physics engine for my rendering library. I would rather not rely on
a large dedicated Physics library (PyOGRE, PyODE, etc.). I thought it might
be interesting to try to make a simple soft-body physics engine with Nu
On Wed, Apr 7, 2010 at 7:40 AM, ioannis syntychakis wrote:
> Hallo Everybody,
>
> I am new to this mailing list and Python, but I am working on something and I
> need your help.
>
> I have a very big matrix. What I want is to search that matrix for
> values above (for example) 150. If there a
>>> import numpy
>>> A = numpy.array([[2,3],[10,12]])
>>> B = numpy.array([[1,4],[9,13]])
>>> C = numpy.array([A,B])
>>> numpy.min(C,0)
array([[ 1, 3],
[ 9, 12]])
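For just two arrays, the element-wise numpy.minimum gives the same result:
>>> numpy.minimum(A, B)
array([[ 1, 3],
[ 9, 12]])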
Ian
___
On Sat, Mar 6, 2010 at 9:46 PM, David Goldsmith wrote:
> Thanks, Ian. I already figured out how to make it not so, but I still want
> to understand the design reasoning behind it being so in the first place
> (thus the use of the question "why (is it so)," not "how (to make it
> different)").
>
W
>>> x = numpy.array(3)
>>> x
array(3)
>>> x.shape
()
>>> y = numpy.array([3])
>>> y
array([3])
>>> y.shape
(1,)
Ian
___
On Sat, Mar 6, 2010 at 12:03 PM, Friedrich Romstedt <
friedrichromst...@gmail.com> wrote:
> At the moment, I can do nothing about that. Seems that we have
> reached the limit. Anyhow, is it now faster than your Python list
> implementation, and if yes, how much? How large was your gain by
> usi
on to ints on-the-fly, so maybe try
> same_edges.astype(numpy.int8).sum(axis = 0).
>
Actually, it's marginally slower :S
> Hope this gives some improvement. I attach the modified version.
>
> Ah, one thing to mention, have you not accidentally timed also the
> printo
Cool--this works perfectly now :-)
Unfortunately, it's actually slower :P Most of the time is spent in the
removing-doubles section.
Some of the costliest calls:
#takes 0.04 seconds
inner = np.inner(ns, v1s - some_point)
#0.0840001106262
sum_1 = sum.reshape((len(sum), 1)).repeat(len(sum), axis = 1)
Firstly, I want to thank you for all the time and attention you've obviously
put into this code.
On Tue, Mar 2, 2010 at 12:27 AM, Friedrich Romstedt <
friedrichromst...@gmail.com> wrote:
> The loop I can replace by numpy operations:
>
> >>> v_array
> array([[1, 2, 3],
> [4, 5, 6],
> [
Excellent--this setup works perfectly! In the areas I was concentrating on,
the speed increased by an order of magnitude.
However, the overall speed seems to have dropped. I believe this may be
because the heavy indexing that follows on the result is slower in numpy.
Is this a correct analysis?
On Sun, Feb 28, 2010 at 6:54 PM, Charles R Harris wrote:
> Why not just add a vector to get translation? There is no need to go the
> homogeneous form. Or you can just leave the vectors at length 4 and use a
> slice to access the first three components. That way you can leave the ones
> in place.
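(A minimal sketch of that suggestion, assuming R is a 3x3 rotation matrix, t is a length-3 translation vector, and points is an N x 3 array:)
import numpy as np
transformed = points.dot(R.T) + t   #rotate every row, then translate via broadcasting; no homogeneous form needed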
On Sun, Feb 28, 2010 at 6:31 PM, Charles R Harris wrote:
> As I understand it, you want *different* matrices applied to each vector?
Nope--I need the same matrix applied to each vector.
Because 3D translation matrices must, if I understand correctly, be 4x4, the
vectors must first be changed to
Hi,
I have a list of vec3 lists (e.g. [[1,2,3],[4,5,6],[7,8,9],...]). To every
single one of the vec3 sublists, I am currently applying transformations. I
need to optimize this with numpy.
To get proper results, as far as I can tell, the vec3 lists must be
expressed as vec4s: [[1,2,3],[4,5,6],[7
Hello,
My analysis shows that the exponential regression gives the best result
(r^2=87%)--power regression gives worse results (r^2=77%). Untransformed
data gives r^2=76%.
I don't think you want lognorm. If I'm not mistaken, that fits the data to
a log(normal distribution random variable).
So,
Theory wise:
-Do a linear regression on your data.
-Apply a logarithmic transform to your data's dependent variable, and do
another linear regression.
-Apply a logarithmic transform to your data's independent variable, and do
another linear regression.
-Take the best regression (highest r^2 value) an
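A rough sketch of those steps with NumPy (the names here are placeholders; x and y are the independent and dependent data, and values must be positive wherever a log is taken):
import numpy as np
slope, intercept = np.polyfit(x, y, 1)   #plain linear fit
s1, i1 = np.polyfit(x, np.log(y), 1)     #log-transformed dependent variable (exponential model)
s2, i2 = np.polyfit(np.log(x), y, 1)     #log-transformed independent variable (logarithmic model)
#then compare the r^2 (or residuals) of the fits and keep the best one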
On Sat, Oct 31, 2009 at 9:38 AM, Matthew Brett wrote:
> c = a.byteswap().newbyteorder()
> c == a
>
In the last two lines, a variable "c" is assigned a modified "a". The
next line tests (==) whether "c" is the same as the unmodified "a".
It isn't, because "c" is the modified "a". Hence,
On Wed, Aug 5, 2009 at 11:34 AM, Charles R Harris wrote:
> It could be you could slip in a small mod that would do what you want.
I'll help, if you want. I'm good with GPUs, and I'd appreciate the
numerical power it would afford.
> The main problems with using GPUs were that CUDA was only avai
Awww, it's fun to be foolish on Fridays!
___
Hi,
Sorry, I've been away in Oregon...
The result isn't quite the same. The arrays must be in the range [0,1], so
I just have it divide x3 and y. I also have it add 1 to size[1], as I
realized that was also necessary for that behavior:
x = np.arange(size[0])
x2 = np.column_stack([x,x+1]).resha
Hello,
I have some code that makes vertex buffer object terrain. Because the setup
for this (a series of triangle strips) is a bit clunky, I just implemented
the tricky parts in Python.
The code works, but it's slow. How should I go about optimizing it?
Thanks,
Ian
size = [#int_something,#in
These are all great algorithms, thanks for the help!
___
That didn't fix it. I messed around some more, but I couldn't get the
spherical coordinates working. I decided to rework my first method. By
raising the radius to the one-third power, as in the other method,
basically the same thing is accomplished. It's working now, thanks. :D
vecs = nump
Thanks for the example, Stéfan!
I'm trying to work this into a position texture, and the arrays need
interleaving. I tried a couple times. Here's what I have:
az = numpy.random.uniform(0, numpy.pi * 2, size*size)
el = numpy.random.uniform(0, numpy.pi, size*size)
r = numpy.random.uniform(size*si
@Stéfan: I thought of the first method. Let's hear the second approach.
@Gökhan: Yes. Toolbox is my metaphor for being able to do effects in
OpenGL. Point sprites are images mapped onto vertices; VBOs are *v*ertex
*b*uffer *o*bjects, which make stuff faster.
___
Presently, it's being rendered using point sprites/VBOs. It's supposed to
be for a game I'm working on, but it's a good effect to have in the toolbox
too :D
___
Hello,
I'm trying to get a cloud particle system working. As part of determining
the depth of every point (to get the light that can pass through), it makes
sense that the volume should be of even density. The volume is a sphere.
Currently, I'm using:
vecs = numpy.random.standard_normal(size=(s
As an off-topic solution, there's always the GPU to do the particle
updating. With half-decent optimization, I've gotten over a million
particles in *real-time*. You could presumably run several of these at the
same time to get as many particles as you want. Downside would be
ease-of-impleme
This works perfectly! Is there likewise a similar call for Numeric?
___
Sounds like it would work, but unfortunately numpy was one of my dependency
constraints. I should have mentioned that.
___
Hello,
I'm working on a program that will draw me a metallic 3D texture. I
successfully made a Perlin noise implementation and found that when the
result is blurred in one direction, the result actually looks somewhat like
brushed aluminum. The plan is to do this for every n*m*3 layer (2D textur
Most excellent solutions, thanks!
___
Hi,
So I'm trying to get a certain sort of 3D terrain working in PyOpenGL. The
idea is to get vertex buffer objects to draw a simple 2D plane comprised of
many flat polygons, and use a vertex shader to deform that with a heightmap
and map that on a sphere.
I've managed to do this with a grid (si
It seems to be working now--I think my problem is elsewhere. Sorry...
___
Hi,
I have a NumPy array. The array is 3D, n x n x 3. I'm trying to swap the
first element of the last dimension with the last. I tried:
temp = myarray[:,:,0].copy()
myarray[:,:,0] = myarray[:,:,2].copy()
myarray[:,:,2] = temp
del temp
But it doesn't work as expected.
I'm definitely not very g
Hey, this looks cool! I may use it in the future. The problem has already
been solved, though, and I don't think changing it is necessary. I'd also
like to keep the dependencies (even packaged ones) to a minimum.
___
Thanks, but I don't want to make SciPy a dependency. NumPy is ok though.
___
Using http://mathworld.wolfram.com/Point-LineDistance3-Dimensional.html, I
came up with:
x0 = numpy.array(#point to collide with)
x1 = #objects' positions
x2 = #objects' previous positions
numerator = numpy.sqrt((numpy.cross((x0-x1),(x0-x2))**2).sum(1))
denominator = numpy.sqrt(((x2-x1)**2).sum(1))
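To finish it off, the per-object distances should then just be the element-wise ratio:
distances = numerator/denominator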
It was in an error code somewhere. I fixed the problem by messing around
with it. I tried the following:
a = numpy.array([1, 2, 3])
print a
and it gave:
[1, 2, 3]
instead of:
array([1, 2, 3])
Then there were errors about it being a sequence instead of an array
somewhere else.
Ian
On Sat, Apr 25, 2009 at 1:26 PM, Ian Mallett wrote:
> I'm going to guess SciPy might be faster (?), but unfortunately it's not
> going to be available. Thanks, though.
>
___
The problem is that the object moves too much between frames. A reasonable
bounding sphere is 1 for this purpose, but the objects move at least 3. So,
given the two arrays, one holding the objects' positions and the other their
previous positions, how can I find if, at some point between, the obj
Yes, this is pretty much what I'm doing. Right now, I'm having weird
troubles with the objects themselves; the objects should and do terminate
after a certain time, yet for some reason they're still being drawn. User
error, I'm sure.
Thanks,
Ian
___
Hmmm, I played around with some other code, and it's working right now--not
sure what I did...
___
Yeah, I ended up finding the [0] bit at the end through trial and error. I
actually do need the indices, though.
I'm having a strange new problem though.
numpy.array([1,2,3])
is returning a sequence??? I'm really confused.
Ian
___
It would be:
numpy.where(array<d)
___
Well, if it will kill performance, I'm afraid I can't do that. Thanks
though.
I think it's working now. Now that I have the 1D array of distances, I need
the indices of those distances that are less than a number "d". What should
I do to do that?
Thanks,
Ian
___
Oops, one more thing. In reference to:
vec = array([[0,0,0],[0,1,0],[0,0,3]])
pos = array([0,4,0])
sqrt(((vec - pos)**2).sum(1)) -> array([ 4., 3., 5.])
Can I make "vec" an array of class instances? I tried:
class c:
    def __init__(self):
        self.position = [0,0,0]
vec = array([c(),c(),
On Sat, Apr 25, 2009 at 12:57 PM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
> In [3]: vec = array([[0,0,0],[0,1,0],[0,0,3]])
>
> In [4]: pos = array([0,4,0])
>
> In [5]: sqrt(((vec - pos)**2).sum(1))
> Out[5]: array([ 4., 3., 5.])
>
> Chuck
>
On Sat, Apr 25, 2009 at 1:00 PM, wrote:
Hi,
I have an array sized n*3. Each three-component row is a 3D position. Given
another 3D position, how is the distance between it and every
row of the array found with NumPy?
So, for example, if the array is:
[[0,0,0],[0,1,0],[0,0,3]]
And the position is:
[0,4,0]
I need this array out
This seems to work:
vecs = Numeric.random.standard_normal(size=(self.size[0],self.size[1],3))
magnitudes = Numeric.sqrt((vecs*vecs).sum(axis=-1))
uvecs = vecs / magnitudes[...,Numeric.newaxis]
randlen = Numeric.random.random((self.size[0],self.size[1]))
randuvecs = uvecs*randlen[...,Numeric.newaxis]
On Thu, Apr 9, 2009 at 11:46 PM, Robert Kern wrote:
> Parabolic? They should be spherical.
The particle system in the last screenshot was affected by gravity. In the
absence of gravity, the results should be spherical, yes. All the vectors
are of unit length, which produces a perfectly smooth s
It gives a perfect parabolic shape that looks very nice, but somewhat
unrealistic. I'd like to scale the unit vectors by a random length (which
can just be a uniform distribution). I tried scaling the unit vector n*n*3
array by a random n*n array, but that didn't work, obviously. Help?
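(For what it's worth, the broadcasting fix is to give the n*n array a trailing axis so the shapes line up--a sketch, assuming unit_vecs is the n*n*3 array and lengths is the n*n array:)
scaled = unit_vecs*lengths[:,:,numpy.newaxis]   #(n,n,3) * (n,n,1) broadcasts element-wise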
___
This works; thank you.
___
Hello,
With the help of this list, over the past two days, I have implemented a GPU
particle system (picture here:
http://img7.imageshack.us/img7/7589/image1olu.png). Every particle is
updated entirely on the GPU, with texture data (arrays) being updated
iteratively between two framebuffer object
Hello,
I want to make an array of size sqrt(n) by sqrt(n) by 3, filled with special
values.
The values range from 0.0 to 3.0, starting with 0.0 at one corner and ending
at 3.0 in the opposite, increasing row by row. The value is to be
encoded in each color. Because this is somewhat abstra
Same. Thanks, too.
___
Thanks!
___
The array follows a pattern: each array of length 2 represents the x,y index
of that array within the larger array.
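(If that's the pattern, a quick sketch of one way to build it with np.mgrid, where n stands in for the side length:)
import numpy as np
ys, xs = np.mgrid[0:n, 0:n]
arr = np.dstack([xs, ys])   #arr[y, x] == [x, y]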
___
On Tue, Mar 31, 2009 at 5:32 PM, Robert Kern wrote:
> How do you want to fill in the array? If you are typing it in
> literally into your code, you would do basically the above, without
> the ...'s, and wrap it in numpy.array(...).
I know that, but in some cases, n will be quite large, perhaps 1
Hello,
I'm trying to make an array of size n*n*2. It should be of the form:
[[[0,0],[1,0],[2,0],[3,0],[4,0], ... ,[n,0]],
[[0,1],[1,1],[2,1],[3,1],[4,1], ... ,[n,1]],
[[0,2],[1,2],[2,2],[3,2],[4,2], ... ,[n,2]],
[[0,3],[1,3],[2,3],[3,3],[4,3], ... ,[n,3]],
[[0,4],[1,4],[2,4],[3,4],[4,4], ... ,