Hello everyone,
I have a bit of code where I am using rpy2 to import R's phyper so I can
perform a hypergeometric test. Unfortunately our cluster does not have a
working installation of rpy2, so I am wondering if I could translate the
call to scipy, which would make the code completely independent of R.
I couldn't come up with a better solution.
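What I have in mind is roughly the sketch below; it only assumes that my
current call is phyper(q, m, n, k) with m successes, n failures and k draws,
and that the upper tail uses lower.tail=FALSE (the helper names are only
illustrative):

from scipy.stats import hypergeom

def phyper_lower(q, m, n, k):
    # R: phyper(q, m, n, k) -> P(X <= q)
    # scipy parametrisation: total population m + n, m successes, k draws
    return hypergeom.cdf(q, m + n, m, k)

def phyper_upper(q, m, n, k):
    # R: phyper(q, m, n, k, lower.tail=FALSE) -> P(X > q)
    return hypergeom.sf(q, m + n, m, k)

As far as I can tell the only real difference is the parametrisation: scipy
takes the total population size instead of the white/black counts separately.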
Thank you in advance,
Bruno
2010/2/25 Robert Kern
> On Thu, Feb 25, 2010 at 07:51, Bruno Santos wrote:
> > I just realized that the line lsPhasedValues =
> > numpy.unique1d(aLoci[numpy.where(aLoci[index_nSize]>0)]) does not wor
I just realized that the line lsPhasedValues =
numpy.unique1d(aLoci[numpy.where(aLoci[index_nSize]>0)]) does not work
properly.
How can I get the unique values of an array based on their indexes?
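What I am after is something like the sketch below, with a small stand-in
array (np.unique is the current name for unique1d):

import numpy as np

row = np.array([0, 0, 6, 0, 0, 3])        # stand-in for aLoci[index_nSize]

# Index with the boolean mask itself so that unique() sees the values,
# not their positions; np.where would return the positions instead.
lsPhasedValues = np.unique(row[row > 0])  # -> array([3, 6])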
2010/2/25 Bruno Santos
> After implementing all the possibilities we discussed yesterday, my f
With this I was able to speed up my code more in an afternoon than in the two
previous weeks. I don't have enough words to thank you.
All the best,
Bruno
2010/2/24 Robert Kern
> On Wed, Feb 24, 2010 at 12:38, Bruno Santos wrote:
> > This is probably me just being stupid. But what
aLoci[index_nSize]
print lsPhasedValues==lsPhasedValues1,lsPhasedValues,lsPhasedValues1
[0 0 6 0 0 3]
False set([3, 6]) (array([2, 5]),)
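If I read that output correctly, the False simply comes from comparing two
different kinds of objects; a minimal sketch with the same row:

import numpy as np

row = np.array([0, 0, 6, 0, 0, 3])     # the row printed above
lsPhasedValues = set(row[row > 0])     # set([3, 6])       the distinct values
lsPhasedValues1 = np.where(row > 0)    # (array([2, 5]),)  their positions
print(lsPhasedValues == lsPhasedValues1)   # False: a set versus an index tuple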
2010/2/24 Bruno Santos
>
>
> 2010/2/24 Chris Colbert
>
> In [4]: %timeit a = np.random.randint(0, 20, 100)
>> 10 loops, best of 3: 4
't scale very well.
>
> On Wed, Feb 24, 2010 at 12:50 PM, Bruno Santos wrote:
>
>> In both versions your lsPhasedValues contains the number of positions in
>> the array that match a certain criterion. What I need in that step is the
>> unique values and not their pos
In both versions your lsPhasedValues contains the number of positions in the
array that match a certain criterion. What I need in that step is the unique
values and not their positions.
2010/2/24 Robert Kern
> On Wed, Feb 24, 2010 at 11:19, Bruno Santos wrote:
> > It seems that the pyt
2010/2/24 Robert Kern
> On Wed, Feb 24, 2010 at 10:40, Bruno Santos wrote:
> > Funny. Which version of python are you using?
>
> Python 2.5.4 on OS X.
>
> --
> Robert Kern
>
> "I have come to believe that the whole world is an enigma, a harmless
> en
very much for your help. I will try to change my code to replace my for loop.
I might need to come back to the mailing list if I run into problems in the
future.
All the best,
Bruno
2010/2/24 Robert Kern
> On Wed, Feb 24, 2010 at 10:21, Bruno Santos wrote:
>
> >> The idi
>
>
>
> The idiomatic way of doing this for numpy arrays would be:
>
> def test2(arrx):
>     return (arrx >= 10).sum()
>
> Even this version takes more time to run than my original Python version
> with arrays.
>>> def test3(listx):
...     return (listx >= 10).sum()
>>> t = timeit.Timer("test3(l
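The full comparison I am timing is roughly the following; the pure-Python
test1 below is only a stand-in for my original code, and the data sizes are
arbitrary:

import timeit
import numpy as np

def test1(seq):
    # plain Python stand-in: count the elements >= 10 with a loop
    return sum(1 for x in seq if x >= 10)

def test2(arrx):
    # the idiomatic numpy version from above
    return (arrx >= 10).sum()

data_list = list(np.random.randint(0, 20, 100))
data_arr = np.array(data_list)

for stmt in ("test1(data_list)", "test2(data_arr)"):
    t = timeit.Timer(stmt, "from __main__ import test1, test2, data_list, data_arr")
    print(stmt, min(t.repeat(3, 10000)))

# For an array this small the per-call overhead of numpy can make test2 the
# slower one; the vectorised version should win as the data grows.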
Hello everyone,
I am using numpy arrays whenever I need performance in my algorithms.
Nevertheless, I am having a performance issue at the moment, mainly because I
am iterating several times over numpy arrays. For that reason I decided to use
timeit to see the performance of different versions
his step?
2007/4/18, Christian K. <[EMAIL PROTECTED]>:
Bruno Santos wrote:
> I tried to use the expression as you said, but I'm not getting the desired
> result.
> My text file looks like this:
>
> # num rows=115 num columns=2634
> AbassiM.txt 0.033023 0.033023 0
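Assuming the 2634 columns in that header are the numeric columns and every row
starts with a label, something along these lines might do it ("data.txt"
stands in for the real file name):

import numpy as np

# loadtxt skips the "# num rows=..." line because it starts with '#'.
# Column 0 holds the label, so read the numbers from the remaining
# 2634 columns (assuming the header counts only the numeric ones)...
values = np.loadtxt("data.txt", usecols=range(1, 2635))

# ...and read the labels separately as strings.
labels = np.loadtxt("data.txt", usecols=(0,), dtype=str)
print(values.shape, labels[:3])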
3, Charles R Harris <[EMAIL PROTECTED]>:
On 4/13/07, Bruno Santos <[EMAIL PROTECTED]> wrote:
>
> Dear Sirs,
> I'm trying to use NumPy to solve a speed problem with Python: I need to
> perform agglomerative clustering as a first step to k-means clustering.
> My prob
Dear Sirs,
I'm trying to use NumPy to solve a speed problem with Python: I need to
perform agglomerative clustering as a first step to k-means clustering.
My problem is that I'm using a very large list in Python and the script is
taking more than 9 minutes to process all the information, so I'm try
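The rough shape of what I am trying to do is sketched below; the data and the
number of clusters are placeholders, and it uses scipy.cluster instead of my
current list-based code:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.cluster.vq import kmeans2

data = np.random.rand(500, 4)      # placeholder for the real observations
n_clusters = 5                     # placeholder cluster count

# Agglomerative step: build the linkage tree and cut it into flat clusters.
Z = linkage(data, method="average")
labels = fcluster(Z, t=n_clusters, criterion="maxclust")   # labels 1..n_clusters

# Use the centroids of the agglomerative clusters to seed k-means.
seeds = np.array([data[labels == i].mean(axis=0)
                  for i in range(1, n_clusters + 1)])
centroids, km_labels = kmeans2(data, seeds, minit="matrix")
print(centroids.shape, np.bincount(km_labels))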