> And as you pointed out,
> most of the time for non-trivial datasets the numpy operations will be
> faster. (I'm daunted by the notion of trying to do linear algebra on
> lists of tuples, assuming that's the relevant set of operations given
> the comparison to the matrix class.)
Note the impo
> Thank you,
> No if the location ( space time or depth) of choice is not
> available then the function I was looking for should give an interpolated
> value at the choice.
> with best regards,
> Sudheer
scipy.ndimage.map_coordinates may be exactly what you want.
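A minimal sketch of what that could look like, assuming the data live on a regular grid and linear interpolation is acceptable (the field and coordinates below are made up for illustration):
import numpy as np
from scipy import ndimage
# hypothetical gridded field, e.g. values on a regular (time, depth) grid
field = np.random.random((50, 40))
# fractional (row, column) coordinates where interpolated values are wanted;
# shape is (ndim, npoints)
coords = np.array([[10.3, 5.5],
                   [22.7, 18.25]])
values = ndimage.map_coordinates(field, coords, order=1)  # order=1 -> linear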
> - Original Message -
> You are right. I needed generic filter - to update current point, and not the
> neighbors as I wrote.
> Initial code is slow loop over 2D python lists, which I'm trying to convert
> to numpy and make it useful. In that loop there is inner loop for calculating
> neighbors properties, which conf
> I have 2D array, let's say: `np.random.random((100,100))` and I want to do
> simple manipulation on each point neighbors, like divide their values by 3.
>
> So for each array value, x, and it neighbors n:
>
> n n n      n/3 n/3 n/3
> n x n  ->  n/3  x  n/3
> n n n      n/3 n/3 n/3
>
> I searched a
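For the record, a rough sketch of how scipy.ndimage.generic_filter can be used to update each point from its 3x3 neighbourhood (the update rule here is only illustrative, not the exact one asked about):
import numpy as np
from scipy import ndimage
def update_point(window):
    # window is the flattened 3x3 neighbourhood; index 4 is the centre point
    centre = window[4]
    neighbours = np.delete(window, 4) / 3.0
    return centre + neighbours.sum()  # illustrative rule only
a = np.random.random((100, 100))
out = ndimage.generic_filter(a, update_point, size=3, mode='nearest')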
Zach
> On Sun, Oct 14, 2012 at 8:24 PM, Zachary Pincus
> wrote:
>> It would be useful for the author of the PR to post a detailed comparison of
>> this functionality with scipy.ndimage.generic_filter, which appears to have
>> very similar functionality.
>>
>
> On 10.10.2012 15:42, Nathaniel Smith wrote:
>> This PR submitted a few months ago adds a substantial new API to numpy,
>> so it'd be great to get more review. No-one's replied yet, though...
>>
>> Any thoughts, anyone? Is it useful, could it be better...?
>
> Fast neighbor search is what scipy.
>> On Tue, Jun 5, 2012 at 8:41 PM, Zachary Pincus
>> wrote:
>>>
>>>> There is a fine line here. We do need to make people clean up lax code
>>>> in order to improve numpy, but hopefully we can keep the cleanups
>>>> reasonable.
>>&
> There is a fine line here. We do need to make people clean up lax code in
> order to improve numpy, but hopefully we can keep the cleanups reasonable.
Oh agreed. Somehow, though, I was surprised by this, even though I keep tabs on
the numpy lists -- at no point did it become clear that "big ch
> It isn't just the array() calls which end up getting problems. For
> example, in 1.5.x
>
> sage: f = 10; type(f)
>
> sage: numpy.arange(f)
> array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) #int64
>
> while in 1.6.x
>
> sage: numpy.arange(f)
> array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=object)
>
> We
> On Mon, May 14, 2012 at 4:33 PM, Zachary Pincus
> wrote:
>> The below seems to be a bug, but perhaps it's unavoidably part of the
>> indexing mechanism?
>>
>> It's easiest to show via example... note that using "[0,1]" to pull two
>>
Hello all,
The below seems to be a bug, but perhaps it's unavoidably part of the indexing
mechanism?
It's easiest to show via example... note that using "[0,1]" to pull two columns
out of the array gives the same shape as using ":2" in the simple case, but
when there's additional slicing happe
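The original example isn't shown above, but the surprise is probably the usual mixed basic/advanced indexing behaviour, roughly like this sketch:
import numpy as np
a = np.zeros((3, 4, 5))
a[0, :, [0, 1]].shape  # (2, 4): the advanced-index axis moves to the front
a[0, :, :2].shape      # (4, 2): pure slicing keeps the original axis order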
> Here's one way you could do it:
>
> In [43]: indices = [0,1,2,3,5,7,8,9,10,12,13,14]
>
> In [44]: jumps = where(diff(indices) != 1)[0] + 1
>
> In [45]: starts = hstack((0, jumps))
>
> In [46]: ends = hstack((jumps, len(indices)))
>
> In [47]: slices = [slice(start, end) for start, end in zip(starts, ends)]
> That all sounds like no option -- sad.
> Cython is no solution because all I want is to leave Python syntax in
> favor of strong OOP design patterns.
What about ctypes?
For straight numerical work where sometimes all one needs to hand across the
python-to-C/C++/Fortran boundary is a pointer to
str(numpy.array(128,dtype=numpy.float64).data)
'\x00\x00\x00\x00\x00\x00`@'
str(numpy.array(128,dtype=numpy.float32).data)
'\x00\x00\x00C'
There's obviously no stride trick whereby one will "look" like the other.
Zach
>
> On Wed, Mar 21, 2012, at 11:19,
> Hi,
>
> Is it possible to have a view of a float64 array that is itself float32?
> So that:
>
A = np.arange(5, dtype='d')
A.view(dtype='f')
>
> would return a size 5 float32 array looking at A's data?
I think not. The memory layout of a 32-bit IEEE float is not a subset of that
of a 64-bit float.
How about the following?
exact: numpy.all(a == a[0])
inexact: numpy.allclose(a, a[0])
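A quick illustration of the difference (values made up):
import numpy as np
a = np.array([3.0, 3.0, 3.0 + 1e-12])
np.all(a == a[0])     # False: exact comparison catches the tiny difference
np.allclose(a, a[0])  # True: equal within the default tolerances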
On Mar 5, 2012, at 2:19 PM, Keith Goodman wrote:
> On Mon, Mar 5, 2012 at 11:14 AM, Neal Becker wrote:
>> What is a simple, efficient way to determine if all elements in an array (in
>> my
>> case, 1D) are equ
> Thanks! That works great if I only want to search over one index but I
> can't quite figure out what to do with more than a single index. So
> suppose I have a labeled, multidimensional array with labels 'month',
> 'year' and 'quantity'. a[['month','year']] gives me an array of indices
> but "
cing. He is asking how to index an array by element
> value, not by element index.
>
> 2012/1/30 Zachary Pincus
> a[x,y,:]
>
> Read the slicing part of the tutorial:
> http://www.scipy.org/Tentative_NumPy_Tutorial
> (section 1.6)
>
> And the documentation:
> http:/
a[x,y,:]
Read the slicing part of the tutorial:
http://www.scipy.org/Tentative_NumPy_Tutorial
(section 1.6)
And the documentation:
http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
On Jan 30, 2012, at 10:25 AM, Ted To wrote:
> Hi,
>
> Is there some straightforward way to access
> You have a million 32-bit floating point numbers that are in the
> thousands. Thus you are exceeding the 32-bit float precision and, if you
> can, you need to increase precision of the accumulator in np.mean() or
> change the input dtype:
a.mean(dtype=np.float32) # default and lacks precision
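A sketch of the difference (sizes and values made up):
import numpy as np
a = (np.random.random(10**6) * 5000).astype(np.float32)
a.mean()                  # accumulates in float32 and can lose precision
a.mean(dtype=np.float64)  # accumulates in double precision instead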
On Jan 24, 2012, at 1:33 PM, K.-Michael Aye wrote:
> I know I know, that's pretty outrageous to even suggest, but please
> bear with me, I am stumped as you may be:
>
> 2-D data file here:
> http://dl.dropbox.com/u/139035/data.npy
>
> Then:
> In [3]: data.mean()
> Out[3]: 3067.024383998
>
Hi Andrea,
scipy.ndimage.zoom will do this nicely for magnification. (Just set the spline
order to 0 to get nearest-neighbor interpolation; otherwise you can use higher
orders for better smoothing.)
For decimation (zooming out) scipy.ndimage.zoom also works, but it's not as
nice as a dedicated
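A minimal sketch of zoom for magnification (array and factor made up):
import numpy as np
from scipy import ndimage
img = np.random.random((100, 100))
nearest = ndimage.zoom(img, 4, order=0)  # nearest-neighbour, 400x400 output
smooth = ndimage.zoom(img, 4, order=3)   # cubic-spline interpolation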
> As an example, it'd be nice to have scipy.ndimage available without the GIL:
> http://docs.scipy.org/doc/scipy/reference/ndimage.html
>
> Now, this *can* easily be done as the core is written in C++. I'm just
> pointing out that some people may wish more for calling scipy.ndimage
> inside thei
I keep meaning to use matplotlib as well, but every time I try I also get
really turned off by the matlabish interface in the examples. I get that it's a
selling point for matlab refugees, but I find it counterintuitive in the same
way Christoph seems to.
I'm glad to hear the OO interface isn't
I think the remaining delta between the integer and float "boxcar" smoothing is
that the integer version (test 21) still uses median_filter(), while the float
one (test 22) is using uniform_filter(), which is a boxcar.
Other than that and the slow roll() implementation in numpy, things look pret
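For reference, a sketch of the two filters being compared (signal made up):
import numpy as np
from scipy import ndimage
signal = np.random.random(1000)
boxcar = ndimage.uniform_filter(signal, size=5)  # true boxcar average
median = ndimage.median_filter(signal, size=5)   # what the integer test used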
Hello Keith,
While I also echo Johann's points about the arbitrariness and non-utility of
benchmarking, I'll briefly comment on just a few tests to help out with
getting things into idiomatic python/numpy:
Tests 1 and 2 are fairly pointless (empty for loop and empty procedure) that
won't a
On Jun 21, 2011, at 1:16 PM, Charles R Harris wrote:
> It's because of the type conversion sum uses by default for greater precision.
Aah, makes sense. Thanks for the detailed explanations and timings!
Hello all,
As a result of the "fast greyscale conversion" thread, I noticed an anomaly
with numpy.ndarray.sum(): summing along certain axes is much slower with
sum() than doing it explicitly, but only with integer dtypes and when
the size of the dtype is less than the machine word. I c
You could try:
src_mono = src_rgb.astype(float).sum(axis=-1) / 3.
But that speed does seem slow. Here are the relevant timings on my machine (a
recent MacBook Pro) for a 3.1-megapixel-size array:
In [16]: a = numpy.empty((2048, 1536, 3), dtype=numpy.uint8)
In [17]: timeit numpy.dot(a.astype(floa
On Apr 26, 2011, at 2:31 PM, Daniel Lepage wrote:
> You need PIL no matter what; scipy.misc.imread, scipy.ndimage.imread,
> and scikits.image.io.imread all call PIL.
scikits.image.io also has a ctypes wrapper for the freeimage library.
I prefer these (well, I wrote them), though apparently the
>
a, b, c = np.array([10]), np.array([2]), np.array([7])
min_val = np.minimum(a, b, c)
min_val
> array([2])
max_val = np.maximum(a, b, c)
max_val
> array([10])
min_val
> array([10])
>
> (I'm using numpy 1.4, and I observed the same behavior with numpy
> 2.0.0.dev8600 o
Here's a ctypes interface to FreeImage that I wrote a while back and
was since cleaned up (and maintained) by the scikits.image folk:
https://github.com/stefanv/scikits.image/blob/master/scikits/image/io/_plugins/freeimage_plugin.py
If it doesn't work out of the box on python 3, then it should
>
>>> In a 1-d array, find the first point where all subsequent points
>>> have values
>>> less than a threshold.
>
> This doesn't imply monotonicity.
> Suppose with have a sin curve, and I want to find the last trough. Or
> a business cycle and I want to find the last recession.
>
> Unless my en
>> This assumes monotonicity. Is that allowed?
>
> The twice-stated problem was:
[Note to avert email-miscommunications] BTW, I wasn't trying to snipe
at you with that comment, Josef...
I just meant to say that this solution solves the problem as Neal
posed it, though that might not be the ex
>> As before, the line below does what you said you need, though not
>> maximally efficiently. (Try it in an interpreter...) There may be
>> another way in numpy that doesn't rely on constructing the index
>> array, but this is the first thing that came to mind.
>>
>> last_greater = numpy.arange(ar
On Feb 9, 2011, at 10:58 AM, Neal Becker wrote:
> Zachary Pincus wrote:
>
>>>>> In a 1-d array, find the first point where all subsequent points
>>>>> have values
>>>>> less than a threshold, T.
>>>>
>>>> Maybe so
>>> In a 1-d array, find the first point where all subsequent points
>>> have values
>>> less than a threshold, T.
>>
>> Maybe something like:
>>
>> last_greater = numpy.arange(arr.shape[0])[arr >= T][-1]
>> first_lower = last_greater + 1
>>
>> There's probably a better way to do it, without the arang
> In a 1-d array, find the first point where all subsequent points
> have values
> less than a threshold, T.
Maybe something like:
last_greater = numpy.arange(arr.shape[0])[arr >= T][-1]
first_lower = last_greater + 1
There's probably a better way to do it, without the arange, though...
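One way to avoid the arange entirely, sketched with made-up data:
import numpy as np
arr = np.array([5.0, 1.2, 6.0, 2.0, 0.5, 0.2])
T = 1.0
last_greater = np.nonzero(arr >= T)[0][-1]  # index of the last value >= T
first_lower = last_greater + 1              # everything from here on is < T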
>>> I am using python for a while now and I have a requirement of
>>> creating a
>>> numpy array of microscopic tiff images ( this data is 3d, meaning
>>> there are
>>> 100 z slices of 512 X 512 pixels.) How can I create an array of
>>> images?
>>
>> It's quite straightforward to create a 3-d
> textlist = ["test1.txt", "test2.txt", "test3.txt"]
>
> for i in textlist:
> text_file = open(textlist, "a")
> text_file.write("\nI suck at Python and need help")
> text_file.close()
>
> But, this doesn't work. It gives me the error:
>
> coercing to Unicode: need string or buffer
> Thank you very much for the prompt response. I have already done
> what you
> have suggested, but there are a few cases where I do need to have an
> array
> named with a variable (looping through large numbers of unrelated
> files and
> calculations that need to be dumped into different an
> Is it possible to use a variable in an array name? I am looping
> through a
> bunch of calculations, and need to have each array as a separate
> entity.
> I'm pretty new to python and numpy, so forgive my ignorance. I'm
> sure there
> is a simple answer, but I can't seem to find it.
>
> l
def repeat(arr, num):
    arr = numpy.asarray(arr)
    return numpy.ndarray(arr.shape + (num,), dtype=arr.dtype,
                         buffer=arr, strides=arr.strides + (0,))
There are limits to what these sort of stride tricks can accomplish,
but repeating as above, or similar, is feasible.
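A quick usage sketch of the repeat() above (values made up):
import numpy
a = numpy.array([1, 2, 3])
r = repeat(a, 4)  # shape (3, 4); every column is a view onto a's data
r.strides         # e.g. (8, 0) for int64: the repeat axis has stride 0, so nothing is copied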
On Jan 1, 2011, at 8:42
> mask = numpy.zeros(medical_image.shape, dtype="uint16")
> mask[ numpy.logical_and( medical_image >= lower, medical_image <=
> upper)] = 255
>
> Where lower and upper are the threshold bounds. Here I' m marking the
> array positions where medical_image is between the threshold bounds
> with 255,
On Nov 23, 2010, at 10:57 AM, Gael Varoquaux wrote:
> On Tue, Nov 23, 2010 at 04:33:00PM +0100, Sebastian Walter wrote:
>> At first glance it looks as if a relaxation is simply not possible:
>> either there are additional rows or not.
>> But with some technical transformations it is possible to r
>>> But wouldn't the performance hit only come when I use it in this
>>> way?
>>> __getattr__ is only called if the named attribute is *not* found (I
>>> guess it falls off the end of the case statement, or is the result
>>> of
>>> the attribute hash table "miss").
>> That's why I said that __g
> Help! I'm having a problem in searching through the *elements* if a
> 2d array. I have a loop over a numpy array:
>
> n,m = G.shape
> print n,m
> for i in xrange(n):
>     for j in xrange(m):
>         print type(G), type(G[i,j]), type(float(G[i,j]))
>         g = float(
This is silly: the structure of the python language prevents
meaningful short-circuiting in the case of
np.any(a!=b)
While it's true that np.any itself may short-circuit, the 'a!=b'
statement itself will be evaluated in its entirety before the result
(a boolean array) is passed to np.any. Th
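To make the point concrete (arrays made up):
import numpy as np
a = np.arange(10**7)
b = a.copy()
b[0] = -1
np.any(a != b)  # True, but a != b builds the full boolean array first;
                # np.any itself cannot short-circuit that comparison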
> Hi Robert,
> so in a big data analysis framework, that is essentially written in C
> ++,
> exposed to python with SWIG, plus dedicated python modules, the user
> performs an analysis choosing some given modules by name,as in :
> myOpt="foo"
> my_analyse.perform(use_optimizer=myOpt)
>
> The attri
> I'm trying to write an implementation of the amoeba function from
> numerical recipes and need to be able to pass a function name and
> parameter list to be called from within the amoeba function. Simply
> passing the name as a string doesn't work since python doesn't know it
> is a function and
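The usual answer is to pass the function object itself rather than its name; a sketch (the function and parameters below are made up):
def amoeba(func, params):
    # functions are first-class objects in Python: call whatever was passed in
    return func(*params)
def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2
amoeba(rosenbrock, (1.0, 1.0))  # 0.0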
Here's a good list of basic geometry algorithms:
http://www.softsurfer.com/algorithms.htm
Zach
On Oct 6, 2010, at 5:08 PM, Renato Fabbri wrote:
> supose you have a line defined by two points and a point. you want
> the distance
>
> what are easiest possibilities? i am doing it, but its nasty'
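For the 2-D line-through-two-points case, a sketch of the cross-product formula (points made up):
import numpy as np
def point_line_distance(p, a, b):
    # distance from point p to the infinite line through a and b (2-D),
    # via |(b - a) x (p - a)| / |b - a|
    p, a, b = map(np.asarray, (p, a, b))
    d = b - a
    return abs(np.cross(d, p - a)) / np.linalg.norm(d)
point_line_distance([1, 1], [0, 0], [2, 0])  # 1.0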
As str objects are supposed to be immutable, I think anything
"official" that makes a string from a numpy array is supposed to copy
the data. But I think you can use ctypes to wrap a pointer and a
length as a python string.
Zach
On Sep 27, 2010, at 8:28 AM, Francesc Alted wrote:
> Hi,
>
>
> I'm trying to do something ... unusual.
>
> gdb supports scripting with Python. From within my python script, I can
> get the address of a contiguous area of memory that stores a fortran
> array. I want to create a NumPy array using "frombuffer". I see that
> the CPython API supports the creation
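One route is to go through ctypes rather than the raw CPython API; a sketch that simulates the external address with a local buffer (in gdb the integer address would come from the debugged process instead):
import ctypes
import numpy as np
# stand-in for "a contiguous area of memory" at a known address
src = (ctypes.c_double * 5)(1.0, 2.0, 3.0, 4.0, 5.0)
addr = ctypes.addressof(src)
buf = (ctypes.c_double * 5).from_address(addr)
arr = np.frombuffer(buf, dtype=np.float64)  # shares the memory, no copy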
>> Though, really, it's annoying that numpy.loadtxt needs both the
>> readline function *and* the iterator protocol. If it just used
>> iterators, you could do:
>>
>> def truncator(fh, delimiter='END'):
>>     for line in fh:
>>         if line.strip() == delimiter:
>>             break
>>         yield line
>>
>> num
On Sep 17, 2010, at 3:59 PM, Benjamin Root wrote:
> So, this code will still raise an error for an empty file.
> Personally, I consider that a bug because I would expect to receive
> an empty array. I could understand raising an error for a non-empty
> file that does not contain anything u
> Though, really, it's annoying that numpy.loadtxt needs both the
> readline function *and* the iterator protocol. If it just used
> iterators, you could do:
>
> def truncator(fh, delimiter='END'):
>     for line in fh:
>         if line.strip() == delimiter:
>             break
>         yield line
>
> numpy.load
>> In the end, the question was; is worth adding start= and stop=
>> markers
>> into loadtxt to allow grabbing sections of a file between two known
>> headers? I imagine it's something that people come up against
>> regularly.
Simple enough to wrap your file in a new file-like object that st
> indices = argsort(a1)
> ranks = zeros_like(indices)
> ranks[indices] = arange(len(indices))
Doesn't answer your original question directly, but I only recently
learned from this list that the following does the same as the above:
ranks = a1.argsort().argsort()
Will wonders never cease...
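A tiny check of the double-argsort trick (values made up):
import numpy as np
a1 = np.array([30, 10, 20])
a1.argsort().argsort()  # array([2, 0, 1]): the rank of each element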
So d
o
the old one). Other slices won't have this property... A[:] = A[::-1]
e.g. will fail totally.
On Aug 4, 2010, at 11:52 AM, Zachary Pincus wrote:
>> Yes it is, but is there a way to do it in-place?
>
> So you want the first 25 elements of the array (in a flat "contiguou
4]])
In [41]: b.flags.c_contiguous
Out[41]: True
In [42]: b.flags.owndata
Out[42]: False
Zach
> On Wed, Aug 4, 2010 at 5:20 PM, Zachary Pincus wrote:
> > A[:5,:5] shows the data I want, but it's not contiguous in memory.
> > A.resize(5,5) is contiguous, but do not cont
> A[:5,:5] shows the data I want, but it's not contiguous in memory.
> A.resize(5,5) is contiguous, but do not contains the data I want.
>
> How to get both efficiently?
A[:5,:5].copy()
will give you a new, contiguous array that has the same contents as
A[:5,:5], but in a new chunk of memory. Is
Hi Ionut,
Check out the "tabular" package:
http://parsemydata.com/tabular/index.html
It seems to be basically what you want... it does "pivot tables" (aka
crosstabulation), it's built on top of numpy, and has simple data IO
tools.
Also check out this discussion on "pivot tables" from the num
> match(v1, v2) => returns a boolean array of length len(v1) indicating
> whether element i in v1 is in v2.
You want numpy.in1d (and friends, probably, like numpy.unique and the
others that are all collected in numpy.lib.arraysetops...)
Definition: numpy.in1d(ar1, ar2, assume_unique=False)
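A quick sketch (vectors made up):
import numpy as np
v1 = np.array([1, 5, 7, 2])
v2 = np.array([2, 5])
np.in1d(v1, v2)  # array([False,  True, False,  True])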
Zach
On Jun 8, 2010, at 3:35 PM, Vincent Davis wrote:
> On Tue, Jun 8, 2010 at 12:26 PM, Zachary Pincus wrote:
>>
>>> Failed again, I have attached the output including the execution of
>>> the above commands.
>>> Thanks for link to the environment vari
> Failed again, I have attached the output including the execution of
> the above commands.
> Thanks for link to the environment variables, I need to read that.
In the attached file (and the one from the next email too) I didn't
see the
MACOSX_DEPLOYMENT_TARGET=10.4
export MACOSX_DEPLOYMENT_TARGET
> On Tue, Jun 8, 2010 at 7:58 AM, Zachary Pincus wrote:
>> This is unexpected, from the error log:
>>> /Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
>>> Python.h:11:20: error: limits.h: No such file or directory
>>
>> No good... i
This is unexpected, from the error log:
> /Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
> Python.h:11:20: error: limits.h: No such file or directory
No good... it can't find basic system headers. Perhaps it's due to the
MACOSX_DEPLOYMENT_TARGET environment variable that w
On Jun 7, 2010, at 5:19 PM, Vincent Davis wrote:
> Here is a link to the full output after typing python setup.py build.
> https://docs.google.com/Doc?docid=0AVQgwG2qUDgdZGYyaGo0NjNfMjI5Z3BraHd6ZDg&hl=en
that's just bringing up an empty document page for me...
Hi Vincent,
(1) Fortran compiler isn't necessary for numpy, but is for scipy,
which isn't ported to python 3 yet.
(2) Could you put up on pastebin or somewhere online the full error
you got?
The problem isn't one of not finding the Python.h header file, which
will be present in the Python 3
ure that there are no collisions would be pretty
trivial (just keep a table of the lat/long for each hash value, which
you'll need anyway, and check that different lat/long pairs don't get
assigned the same bin).
Zach
> -Mathew
>
> On Tue, Jun 1, 2010 at 1:49 PM, Zach
> Hi
> Can anyone think of a clever (non-looping) solution to the following?
>
> A have a list of latitudes, a list of longitudes, and list of data
> values. All lists are the same length.
>
> I want to compute an average of data values for each lat/lon pair.
> e.g. if lat[1001] lon[1001] = la
> On Wed, Apr 14, 2010 at 10:25, Peter Shinners
> wrote:
>> Is there a way to combine two 1D arrays with the same size into a 2D
>> array? It seems like the internal pointers and strides could be
>> combined. My primary goal is to not make any copies of the data.
>
> There is absolutely no way t
> In an array I want to replace all NANs with some number say 100, I
> found a method nan_to_num but it only replaces with zero.
> Any solution for this?
Indexing with a mask is one approach here:
a[numpy.isnan(a)] = 100
also cf. numpy.isfinite as well in case you want the same with infs.
Zach
> You should open a ticket for this.
http://projects.scipy.org/numpy/ticket/1439
On Mar 26, 2010, at 11:26 AM, Charles R Harris wrote:
>
>
> On Wed, Mar 24, 2010 at 1:13 PM, Zachary Pincus wrote:
> Hello,
>
> I assume it is a bug that calling numpy.array() on a
Hello,
I assume it is a bug that calling numpy.array() on a flatiter of a
fortran-strided array that owns its own data causes that array to be
rearranged somehow?
Not sure what happens with a fancier-strided array that also owns its
own data (because I'm not sure how to create one of those
> Is there a good way in NumPy to convert from a bit string to a boolean
> array?
>
> For example, if I have a 2-byte string s='\xfd\x32', I want to get a
> 16-length boolean array out of it.
numpy.unpackbits(numpy.fromstring('\xfd\x32', dtype=numpy.uint8))
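Sketched out with the example string (frombuffer is an equivalent spelling of the same idea):
import numpy as np
bits = np.unpackbits(np.frombuffer(b'\xfd\x32', dtype=np.uint8))
bits.astype(bool)  # 16 booleans, most-significant bit of each byte first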
> I'm having some trouble here. I have a list of numpy arrays. I
> want to
> know if an array 'u' is in the list.
Try:
any(numpy.all(u == l) for l in array_list)
standard caveats about float comparisons apply; perhaps
any(numpy.allclose(u, l) for l in array_list)
is more appropriate in certa
> Not to be a downer, but this problem is technically NP-complete. The
> so-called "knapsack problem" is to find a subset of a collection of
> numbers that adds up to the specified number, and it is NP-complete.
> Unfortunately, it is exactly what you need to do to find the indices
> to a particula
Hi all,
I'm curious as to what the most straightforward way is to convert an
offset into a memory buffer representing an arbitrarily strided array
into the nd index into that array. (Let's assume for simplicity that
each element is one byte...)
Does sorting the strides from largest to small
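One sketch of the sorted-strides idea, assuming positive strides and 1-byte elements as above (whether it covers every arbitrarily strided case is another matter): treat the offset as a mixed-radix number and peel off the largest stride first.
def offset_to_index(offset, strides):
    index = [0] * len(strides)
    for axis in sorted(range(len(strides)), key=lambda i: strides[i], reverse=True):
        index[axis], offset = divmod(offset, strides[axis])
    return tuple(index)
# offset 13 into a C-contiguous (3, 5) array of 1-byte elements -> (2, 3)
offset_to_index(13, (5, 1))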
Hi,
Having just written some cython code to iterate a neighborhood across
an array, I have some ideas about features that would be useful for a
general frame. Specifically, being able to pass in a "footprint"
boolean array to define the neighborhood is really useful in many
contexts. Also
Check out this thread:
http://www.mail-archive.com/numpy-discuss...@lists.sourceforge.net/msg01154.html
In short, it can be done, but it can be tricky to make sure you don't
leak memory. A better option if possible is to pre-allocate the array
with numpy and pass that buffer into the C code --
Unless I read your request or the documentation wrong, h5py already
supports pulling specific fields out of "compound data types":
http://h5py.alfven.org/docs-1.1/guide/hl.html#id3
> For compound data, you can specify multiple field names alongside
> the numeric slices:
> >>> dset["FieldA"]
>
> Wow. Once again, Apple makes using python unnecessarily difficult.
> Someone needs a whack with a clue bat.
Well, some tools from the operating system use numpy and other python
modules. And upgrading one of these modules might conceivably break
that dependency, leading to breakage in the O
Hello,
a < b < c (or any equivalent expression) is python syntactic sugar for
(a < b) and (b < c).
Now, for numpy arrays, a < b gives an array with boolean True or False
where the elements of a are less than those of b. So this gives us two
arrays that python now wants to "and" together. To
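Concretely (arrays made up):
import numpy as np
a, b, c = np.array([1, 5]), np.array([2, 3]), np.array([4, 4])
# a < b < c raises "The truth value of an array ... is ambiguous"
(a < b) & (b < c)             # elementwise: array([ True, False])
np.logical_and(a < b, b < c)  # equivalent spelling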
I believe that pretty generic connected-component finding is already
available with scipy.ndimage.label, as David suggested at the
beginning of the thread...
This function takes a binary array (e.g. zeros where the background
is, non-zero where foreground is) and outputs an array where each
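A minimal sketch (binary image made up):
import numpy as np
from scipy import ndimage
binary = np.array([[0, 1, 1, 0],
                   [0, 0, 0, 0],
                   [1, 0, 0, 1]])
labels, n = ndimage.label(binary)  # n == 3 connected components here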
> We have a need to generate half-size versions of RGB images as quickly
> as possible.
How good do these need to look? You could just throw away every other
pixel... image[::2, ::2].
Failing that, you could also try using ndimage's convolve routines to
run a 2x2 box filter over the ima
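Roughly (image shape made up; the box filter is just one reasonable pre-blur):
import numpy as np
from scipy import ndimage
image = np.random.randint(0, 256, (480, 640, 3)).astype(np.uint8)
quick = image[::2, ::2]  # just drop every other pixel
# nicer-looking: average each 2x2 block before decimating
blurred = ndimage.uniform_filter(image.astype(float), size=(2, 2, 1))
half = blurred[::2, ::2].astype(np.uint8)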
> Does numpy have functions to convert between e.g. an array of uint32
> and
> uint8, where the uint32 array is a packed version of the uint8 array
> (selecting little/big endian)?
You could use the ndarray constructor to look at the memory differently:
In : a = numpy.arange(240, 260, dtype=nu
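ndarray.view() is another way to do the same reinterpretation; a sketch (values made up):
import numpy as np
a = np.arange(240, 260, dtype=np.uint32)
b = a.view(np.uint8)   # same memory seen as packed bytes, no copy
c = b.view(np.uint32)  # and back again; byte order follows the native endianness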
Might want to look into masked arrays: numpy.ma.array.
a = numpy.array([1,5,4,99])
b = numpy.array([3,7,2,8])
arr = numpy.array([a, b])
masked = numpy.ma.array(arr, mask = arr==99)
masked.mean(axis=0)
masked_array(data = [2.0 6.0 3.0 8.0],
mask = [False False False False],
fi
You might want also to look into scipy.ndimage.zoom.
Zach
On Jul 9, 2009, at 9:42 AM, Thomas Hrabe wrote:
> Hi all,
>
> I am not a newbie to python and numpy, but however, I kinda do not
> find a proper solution for my interpolation problem without coding it
> explicitly myself.
>
> All I want t
scipy.ndimage.zoom (and related interpolation functions) would be a
good bet -- different orders of interpolation are available, too,
which can be useful.
Zach
On May 4, 2009, at 11:40 AM, Johannes Bauer wrote:
> Hello list,
>
> is there a possibility to scale an array by interpolation,
>
Hi Johannes,
According to http://www.pygtk.org/pygtk2reference/class-gdkpixbuf.html ,
the pixels_array is a numeric python array (a
predecessor to numpy). The upshot is that perhaps the nice
broadcasting machinery will work fine:
pb_pixels[...] = fits_pixels[..., numpy.newaxis]
This might
Hi John,
First, did you build your own Python 2.6 or install from a binary?
When you type "python" at the command prompt, which python runs? (You
can find this out by running "which python" from the command line.)
Second, it appears that numpy is *already installed* for a non-apple
python 2
>> Does it work to use a cutoff of half the size of the input arrays in
>> each dimension? This is equivalent to calculating both shifts (the
>> positive and negative) and using whichever has a smaller absolute
>> value.
> no, unfortunately the cutoff is not half of the dimensions.
Explain more
> I have two 3D density maps (meaning volumetric data, each one
> essentially a IxJxK matrix containing real values that sum to one) and
> want to find translation of between the two that maximises
> correlation.
> This can be done by computing the correlation between the two
> (correlation theor
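For reference, a sketch of the correlation-theorem approach the question describes (tiny made-up maps; the argmax gives the translation only up to wrap-around, which is where the cutoff discussion above comes in):
import numpy as np
density_a = np.random.random((8, 8, 8))
density_b = np.roll(density_a, (2, 0, 1), axis=(0, 1, 2))
cc = np.fft.ifftn(np.fft.fftn(density_a) * np.conj(np.fft.fftn(density_b))).real
shift = np.unravel_index(np.argmax(cc), cc.shape)  # peak location encodes the translation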
Hi Christian,
Check out this discussion from a little while ago on a very similar
issue (but in 3d):
http://www.nabble.com/Help-with-interpolating-missing-values-from-a-3D-scanner-td21489413.html
Most of the suggestions should be directly applicable.
Zach
On Apr 6, 2009, at 9:01 AM, Christia
Hi David,
Thanks again for bundling in the architecture-specification flag into
the numpy superpack installers: being able to choose sse vs. nosse
installs is really helpful to me, and from what I hear, many others as
well!
Anyhow, I just noticed (sorry I didn't see this before the release)
> did you have a look at OpenCV?
>
> http://sourceforge.net/projects/opencvlibrary
>
> Since a couple of weeks, we have implemented the numpy array
> interface so data exchange is easy [check out from SVN].
Oh fantastic! That is great news indeed.
Zach
Hi Stéfan,
>>> http://github.com/stefanv/bilateral.git
>>
>> Cool! Does this, out of curiosity, break things for you? (Or Nadav?)
>
> I wish I had some way to test. Do you maybe have a short example that
> I can convert to a test?
Here's my test case for basic working-ness (e.g. non exception-
> 2009/3/1 Zachary Pincus :
>> Dag, the cython person who seems to deal with the numpy stuff, had
>> this to say:
>>> - cimport and import are different things; you need both.
>>> - The "dimensions" field is in Cython renamed "shape" to be close
mpy version that you run?
>
> I am not very experienced with cython (I suppose that Stefan has
> some experience).
> As you said, probably the cython list is a better place to look for
> an answer. I would be happy to see how this issue resolved.
>
> Nadav
>
>
>
> --