Here's a seed for your function:
import numpy as np

s = 'ThesampletextthatcouldbereadedthesameinbothordersArozaupalanalapuazorA'
f = np.array(list(s)).view('int8').astype(float)   # byte codes of the characters
f -= f.mean()                                       # remove the mean so the peak stands out
# Convolving f with itself is the same as correlating it with its reverse, so
# the peak sits at (twice) the centre of the palindromic stretch.
maybe_here = np.argmax(np.convolve(f, f)) // 2
magic = 10
print s[maybe_here - magic:maybe_here + magic + 1]
Let us know how to
There are various ways to repack the pair of arrays into one array.
The most universal is probably to use a structured array (it can repack more
than a pair):
x = np.array(zip(a, b), dtype=[('a',int), ('b',int)])
After repacking you can use unique and other numpy methods:
xu = np.unique(x)
zip(xu['a'], xu['b'])
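A quick worked round trip with made-up values (my addition, not part of the
original reply), showing the repack, the deduplication, and the unpack:

import numpy as np
a = np.array([1, 2, 1, 3])
b = np.array([4, 5, 4, 6])
# Repack the pair into one structured array, deduplicate, then unpack.
x = np.array(list(zip(a, b)), dtype=[('a', int), ('b', int)])
xu = np.unique(x)
a_u, b_u = xu['a'], xu['b']
print(a_u)   # [1 2 3]
print(b_u)   # [4 5 6]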
rt_point+length_data]) /= factor, and that for
> every start_point and length_data.
>
> How to do this fast?
>
> Cheers
> Wolfgang
> On 2012-05-31, at 1:43 AM, Val Kalatsky wrote:
>
> What do you mean by "normalized it"?
> Could you give the output of your procedure
What do you mean by "normalized it"?
Could you give the output of your procedure for the sample input data.
Val
On Thu, May 31, 2012 at 12:36 AM, Wolfgang Kerzendorf wrote:
> Dear all,
>
> I have an ndarray which consists of many arrays stacked behind each other
> (only conceptually, in truth it
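Not part of the thread, but a minimal sketch of one fast way to do it,
assuming the stacked arrays live in one contiguous 1-D buffer, the segments
tile the buffer, and each segment has its own factor (all names and numbers
below are made up):

import numpy as np

data = np.random.rand(100)              # one long buffer of stacked arrays
lengths = np.array([30, 25, 45])        # length_data for every segment
factors = np.array([2.0, 3.0, 4.0])     # normalization factor per segment

# Expand each segment's factor to one value per element, then divide the
# whole buffer in a single vectorized step instead of looping over segments.
data /= np.repeat(factors, lengths)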
Confirmed on Ubuntu, np.__version__ 1.5.1 and 1.6.1 (backtraces are
below).
Something seems to be broken before it comes to memcpy
and/or _aligned_contig_to_strided_size1.
Val
-
np.__version__ 1.6.1
Program received signal SIGSEGV, Se
You'll need some patience to get non-zeros, especially for k=1e-5
In [84]: np.sum(np.random.gamma(1e-5, size=1000000) != 0.0)
Out[84]: 7259
that's less than 1%. For k=1e-4 it's ~7%
Val
On Mon, May 28, 2012 at 10:33 PM, Uri Laserson wrote:
> I am trying to sample from a Dirichlet distribution, whe
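For context, a sketch of the usual gamma-normalization route to Dirichlet
samples, which is exactly where the zeros bite for tiny concentration
parameters (illustrative values, not the poster's):

import numpy as np

alpha = 1e-5 * np.ones(5)       # tiny concentration parameters
g = np.random.gamma(alpha)      # most draws underflow to exactly 0.0 in float64
total = g.sum()
if total > 0:
    sample = g / total          # Dirichlet draw: typically one component ~1, the rest 0
else:
    sample = None               # every component underflowed; redraw or work in log space
print(sample)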
Hi Tod,
Would you consider bundling the quaternion dtype with your package?
I think everybody wins: your package would become stronger and
Martin's dtype would become easily available.
Thanks
Val
On Sat, May 5, 2012 at 6:27 AM, Tom Aldcroft
wrote:
> On Fri, May 4, 2012 at 11:44 PM, Ilan Schnell
The only slicing short-cut I can think of is the Ellipsis object, but it's
not going to help you much here.
The alternatives that come to my mind are (1) manipulation of shape
directly and (2) building a string and running eval on it.
Your solution is better than (1), and (2) is a horrible hack, so
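For readers who have not met the Ellipsis short-cut, a small illustration
(my example, not from the thread):

import numpy as np

a = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)

# Ellipsis stands in for "all the remaining axes", so these pairs are equivalent:
assert np.array_equal(a[..., 0], a[:, :, :, 0])
assert np.array_equal(a[0, ...], a[0, :, :, :])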
in the eigenvector matrix to real positive numbers,
> that's why the numpy solutions looks neat.
> Val
>
> PS Probably nobody cares to know, but the phase factor I gave in my 1st
> email should be negated:
> 0.99887305445887753+0.047461785427773337j
>
> On Mon, Apr
res to know, but the phase factor I gave in my 1st
email should be negated:
0.99887305445887753+0.047461785427773337j
On Mon, Apr 2, 2012 at 8:53 PM, Matthew Brett wrote:
> Hi,
>
> On Mon, Apr 2, 2012 at 5:38 PM, Val Kalatsky wrote:
> > Both results are correct.
> > There a
Both results are correct.
There are 2 factors that make the results look different:
1) The order: the 2nd eigenvector of the numpy solution corresponds to the
1st eigenvector of your solution,
note that the vectors are written in columns.
2) The phase: an eigenvector can be multiplied by an arbitra
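A quick numerical check of the phase point, sketched on a made-up matrix
rather than the poster's data:

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
w, V = np.linalg.eig(A)

# Multiplying an eigenvector by any unit-modulus factor leaves it a valid
# eigenvector, so solutions differing only by phase (and ordering) are both correct.
phase = np.exp(0.3j)
v_alt = phase * V[:, 0]
print(np.allclose(A.dot(v_alt), w[0] * v_alt))   # True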
Will this do what you need to accomplish?
import datetime
import numpy as np
# a is the poster's list of (date string, count) pairs
np.array([(datetime.datetime.strptime(i[0], "%Y-%m-%d").date(), i[1]) for i in a],
         dtype=[('date', 'object'), ('count', 'int')])
Val
On Wed, Mar 21, 2012 at 11:48 PM, Yan Tang wrote:
> Hi,
>
> I am really confused on the np array or rec
769313486e+308
nexp =11 min=-max
-
On Thu, Mar 15, 2012 at 11:38 PM, Matthew Brett wrote:
> Hi,
>
> On Thu, Mar 15, 2012 at 9:33 PM, Val Kalatsky wrote:
> >
> > I just happened to have an xp6
I just happened to have an xp64 VM running:
My version of numpy (1.6.1) does not have float128 (see more below what I
get in ipython session).
If you need to test something else please let me know.
Val
---
Enthought Python Distribution -- www.enthought.com
Python 2.7.2 |EPD 7.2-2 (64-bit)| (defau
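For anyone repeating this check, a portable probe for extended precision
(my sketch, not the original session):

import numpy as np

# float128 (or float96) is only exposed where the platform's long double is
# wider than 64 bits; on win64 builds it is typically absent.
print(hasattr(np, 'float128'))
print(np.finfo(np.longdouble))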
Can you?
The question should be: why does sympy not have Fresnel integrals?
On Sun, Mar 11, 2012 at 1:06 AM, aa wrote:
> why sympy cannot integrate sin(x**2)??
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/m
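For reference, the poster's integral is a Fresnel integral, which scipy does
provide; a quick cross-check of the identity
integral_0^x sin(t**2) dt = sqrt(pi/2) * S(x*sqrt(2/pi)) (my sketch):

import numpy as np
from scipy.special import fresnel
from scipy.integrate import quad

x = 1.5
S, C = fresnel(x * np.sqrt(2.0 / np.pi))
closed_form = np.sqrt(np.pi / 2.0) * S

numeric, _ = quad(lambda t: np.sin(t ** 2), 0.0, x)
print(closed_form)   # agrees with the quadrature result below to machine precision
print(numeric)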
0010008acce PyEval_EvalCodeEx +
> 1803
> 22 org.python.python 0x00010008ad61 PyEval_EvalCode + 54
> 23 org.python.python 0x0001000a265a Py_CompileString + 78
> 24 org.python.python 0x0001000a2723 PyRun_FileExFlags +
> 150
> 25 org.pyth
In [4]: ua.unit
Out[4]: 'liter'
On Wed, Mar 7, 2012 at 7:15 PM, Val Kalatsky wrote:
>
> Seeing the backtrace would be helpful.
> Can you do whatever leads to the segfault
> from python run from gdb?
> Val
>
>
> On Wed, Mar 7, 2012 at 7:04 PM, Christoph
Seeing the backtrace would be helpful.
Can you do whatever leads to the segfault
from python run from gdb?
Val
On Wed, Mar 7, 2012 at 7:04 PM, Christoph Gohle
wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> Hi,
>
> I have been struggling for quite some time now. Desperate as I am, n
Viewness is in the eyes of the beholder.
You have to use indirect methods to figure it out.
Probably the most robust approach is to go up the base chain until you get
None.
In [71]: c1=np.arange(16)
In [72]: c2=c1[::2]
In [73]: c4=c2[::2]
In [74]: c8=c4[::2]
In [75]: id(c8.base)==id(c4)
Out[75]: True
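A small helper in the spirit of the reply above (my sketch): follow .base
until an array that owns its data is reached.

import numpy as np

def owns_data(arr):
    # The last array in the .base chain is the one that owns the buffer.
    while arr.base is not None:
        arr = arr.base
    return arr

c1 = np.arange(16)
c8 = c1[::2][::2][::2]
print(owns_data(c8) is c1)   # True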
Hi Slava,
Since your k is only 10, here is a quickie:
import numpy as np
arr = np.arange(n)          # n, k, p as in your question
for i in range(k):
    np.random.shuffle(arr)
    print np.sort(arr[:p])
If you ever get non-unique entries in a set of k=10 for your n and p,
consider yourself lucky:)
Val
On Mon, Feb 20, 2012 at 10:35
Hi Bill,
Looks like you are running a very fresh version of numpy.
Without knowing the build version and what's going on in the extension
module I can't tell you much.
The usual suspects would be:
1) Numpy bug, not too likely.
2) Incorrect use of PyArray_FromObject, you'll need to send more info.
Aronne made good suggestions.
Here is another weapon for your arsenal:
1) I assume that the shape of your array is irrelevant (reshape if needed)
2) Depending on the structure of your data np.unique can be handy:
arr_unique, idx = np.unique(arr1d, return_inverse=True)
then search arr_unique instead
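A worked illustration of the unique trick with made-up data: search the
small sorted unique array, then map back to positions in the original via
the inverse index.

import numpy as np

arr1d = np.array([7, 3, 7, 1, 3, 7, 9])
arr_unique, idx = np.unique(arr1d, return_inverse=True)

value = 7
k = np.searchsorted(arr_unique, value)        # position in the sorted unique array
if k < arr_unique.size and arr_unique[k] == value:
    occurrences = np.nonzero(idx == k)[0]     # all positions of value in arr1d
else:
    occurrences = np.array([], dtype=int)
print(occurrences)   # [0 2 5]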
To avoid all the hassle I suggest getting EPD:
http://enthought.com/products/epd.php
You'd get way more than just NumPy, which may or may not be what you need.
I have installed various NumPys on Linux only, and from source only, which did
require compilation (gcc), so I am not a good help for your s
I believe there are no provisions made for that in ndarray.
But you can subclass ndarray.
Val
On Wed, Jan 25, 2012 at 12:10 PM, Emmanuel Mayssat wrote:
> Is there a way to store metadata for an array?
> For example, date the samples were collected, name of the operator, etc.
>
> Regards,
> --
> E
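A minimal sketch of the subclassing route (following the standard
ndarray-subclass pattern; the class and attribute names are mine):

import numpy as np

class MetaArray(np.ndarray):
    # ndarray subclass that carries a metadata dict alongside the data.

    def __new__(cls, input_array, meta=None):
        obj = np.asarray(input_array).view(cls)
        obj.meta = dict(meta or {})
        return obj

    def __array_finalize__(self, obj):
        # Propagate metadata to views, slices and new arrays made from this one.
        self.meta = getattr(obj, 'meta', {})

a = MetaArray(np.arange(5), meta={'date': '2012-01-25', 'operator': 'EM'})
print(a.meta['operator'])   # EM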
Just what Bruce said.
You can run the following to confirm:
np.mean(data - data.mean())
If for some reason you do not want to convert to float64 you can add the
result of the previous line to the "bad" mean:
bad_mean = data.mean()
good_mean = bad_mean + np.mean(data - bad_mean)
Val
On Tue, Jan
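An illustrative use of the correction with single-precision data (my sketch;
how much the naive mean drifts depends on the dtype and on how the sum is
accumulated):

import numpy as np

data = np.random.rand(10 ** 6).astype(np.float32) + 1000.0

bad_mean = data.mean()                             # accumulated in float32
good_mean = bad_mean + np.mean(data - bad_mean)    # residuals are tiny, so float32
                                                   # represents them accurately
print(bad_mean)
print(good_mean)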
A - np.digitize(A, S)
Should do the trick; just make sure that S is sorted and that A and S do not
overlap. If they do, remove those items from A using set operations.
Val
On Tue, Jan 10, 2012 at 2:14 PM, Mads Ipsen wrote:
> **
> Hi,
>
> Suppose you have N items, say N = 10.
>
> Now a subset of these
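A worked example of the trick with my own numbers: after removing a sorted
subset S from the N items, A - np.digitize(A, S) gives each surviving item
its new, compressed index.

import numpy as np

S = np.array([2, 5, 7])        # removed items (sorted, disjoint from A)
A = np.array([0, 3, 6, 9])     # surviving items to re-index (out of the original 0..9)

# np.digitize counts, for each element of A, how many removed items lie below
# it; subtracting that count shifts A down into the compressed numbering.
new_index = A - np.digitize(A, S)
print(new_index)   # [0 2 4 6]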
callback or not. This has to
> be evaluated quite a lot.
>
> Oh well ... and 1.3.0 is pretty old :-)
>
> cheers,
> Samuel
>
> On 31.12.2011, at 07:48, Val Kalatsky wrote:
>
> >
> > Hi folks,
> >
> > First post, may not follow the standards, plea
OnFail: the resolution took place and did not succeed, the user is given a
chance to fix it.
In most cases these callbacks are NULLs.
I could patch numpy with a generic method that does it, but it's a shame
not to use the good ufunc machinery.
Thanks for tips and suggestions.
Val Kal