On 11/13/07, Terry Reedy ([EMAIL PROTECTED]) wrote:
>"Scott SA" <[EMAIL PROTECTED]> wrote in message
>news:[EMAIL PROTECTED]
>| On 11/12/07, Scott SA ([EMAIL PROTECTED]) wrote:
>| I decided to test the speeds of the four methods:
>|
>| def set_example(urls):
>|     s = set()
>|     for url in urls:
>|         if url not in s:
>|             s.add(url)
On Mon, 12 Nov 2007 16:21:36 -0300, Scott SA <[EMAIL PROTECTED]> wrote:
> I decided to test the speeds of the four methods:
(but one should always check for correctness before checking speed)
> def dict_example(urls):
>     d = {}
>     for url in urls:
>         if url in d:
>             d[url] += 1
>         else:
>             d[url] = 1
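The message above is cut off before the dict is consumed; a runnable sketch of the same dict-counting idea, with a final extraction step that is my addition rather than part of the quoted code:

```python
def dict_example(urls):
    """Count each URL, then return those seen more than once."""
    d = {}
    for url in urls:
        if url in d:
            d[url] += 1
        else:
            d[url] = 1
    return [url for url, n in d.items() if n > 1]

print(dict_example(["a", "b", "a", "c", "b"]))  # ['a', 'b']
```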
"Scott SA" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
| On 11/12/07, Scott SA ([EMAIL PROTECTED]) wrote:
| I decided to test the speeds of the four methods:
|
| def set_example(urls):
|     s = set()
|     for url in urls:
|         if url not in s:
|             s.add(url)
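As posted, set_example only deduplicates; a sketch of a variant that also reports which URLs repeat (the `dups` list is my addition, not part of the original timing code):

```python
def set_example(urls):
    """Return URLs that appear more than once, in first-seen order."""
    seen = set()
    dups = []
    for url in urls:
        if url not in seen:
            seen.add(url)
        elif url not in dups:
            dups.append(url)
    return dups

print(set_example(["a", "b", "a", "c", "b"]))  # ['a', 'b']
```

Since the original poster says no URL appears more than twice, the `elif url not in dups` guard is redundant in their setting; it just keeps the sketch general.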
On 11/12/07, Scott SA ([EMAIL PROTECTED]) wrote:
Uhm sorry, there is a slightly cleaner way of running the second option I
presented (sorry for the second post).
>If you would find an index and count useful, you could do something like this:
>
>for idx in range(len(urls)):
>    uniqu
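The quoted snippet is cut off; one hedged guess at what an index-and-count loop can look like, using enumerate rather than range(len(...)) (the names `first_seen` and `dup_indices` are mine, not the original poster's):

```python
def dup_indices(urls):
    """Map each duplicated URL to the list of indices where it occurs."""
    first_seen = {}
    dups = {}
    for idx, url in enumerate(urls):
        if url in first_seen:
            dups.setdefault(url, [first_seen[url]]).append(idx)
        else:
            first_seen[url] = idx
    return dups

print(dup_indices(["a", "b", "a"]))  # {'a': [0, 2]}
```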
On 11/12/07, Michel Albert ([EMAIL PROTECTED]) wrote:
>On Nov 9, 11:45 pm, Bruno Desthuilliers
><[EMAIL PROTECTED]> wrote:
>> [EMAIL PROTECTED] wrote:
>>
>> > Hi,
>>
>> > I have to get a list of URLs one by one and find the URLs that appear
>> > more than once (it can't be more than twice).
>>
> Now, I can see that this method has some superfluous data (the `1`
> that is assigned to the dict). So I suppose it is less memory
> efficient. But is it slower, then? Both implementations use hashes
> of the URL to access the data. Just asking out of curiosity ;)
Performance-wise, there i
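The reply above is truncated; one way to settle the dict-versus-set question empirically is a timeit comparison along these lines (a sketch only; the synthetic URL list and repetition counts are made up):

```python
import timeit

# Synthetic data: 10,000 lookups over 1,000 distinct URLs.
urls = ["http://example.com/%d" % (i % 1000) for i in range(10000)]

def with_set(urls):
    s = set()
    for url in urls:
        if url not in s:
            s.add(url)
    return s

def with_dict(urls):
    d = {}
    for url in urls:
        if url not in d:
            d[url] = 1
    return d

# Both perform one hash lookup per URL, so timings are typically close.
print("set: ", timeit.timeit(lambda: with_set(urls), number=100))
print("dict:", timeit.timeit(lambda: with_dict(urls), number=100))
```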
On Nov 9, 11:45 pm, Bruno Desthuilliers
<[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
>
> > Hi,
>
> > I have to get a list of URLs one by one and find the URLs that appear
> > more than once (it can't be more than twice).
> >
> > I thought to put them into a binary search tree; this way the
[EMAIL PROTECTED] wrote:
> Hi,
>
> I have to get a list of URLs one by one and find the URLs that appear
> more than once (it can't be more than twice).
>
> I thought to put them into a binary search tree; this way they'll be
> sorted and I'll be able to check whether a URL already exists.
What ab
On Nov 9, 4:06 pm, [EMAIL PROTECTED] wrote:
> Hi,
>
> I have to get a list of URLs one by one and find the URLs that appear
> more than once (it can't be more than twice).
>
> I thought to put them into a binary search tree; this way they'll be
> sorted and I'll be able to check if the URL already e
What if someone wants to implement, say, Huffman compression? That requires
a binary tree and the ability to traverse the tree. I've been looking for
some sort of binary tree library as well, and I haven't had any luck.
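For the Huffman use case mentioned above, no library is strictly needed: a small node class plus the stdlib heapq covers both building and traversing the tree. A sketch (all names here are mine, not from any particular library):

```python
import heapq
from collections import Counter

class Node:
    """Binary tree node: leaves carry a symbol, internal nodes carry None."""
    def __init__(self, freq, symbol=None, left=None, right=None):
        self.freq, self.symbol = freq, symbol
        self.left, self.right = left, right
    def __lt__(self, other):          # needed so heapq can order nodes
        return self.freq < other.freq

def huffman_codes(text):
    """Build a Huffman tree for text and return {symbol: bitstring}."""
    heap = [Node(freq, sym) for sym, freq in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:              # merge the two rarest subtrees
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        heapq.heappush(heap, Node(a.freq + b.freq, left=a, right=b))
    codes = {}
    def walk(node, prefix):           # traverse: 0 = left, 1 = right
        if node.symbol is not None:
            codes[node.symbol] = prefix or "0"
        else:
            walk(node.left, prefix + "0")
            walk(node.right, prefix + "1")
    walk(heap[0], "")
    return codes
```

Rarer symbols end up deeper in the tree, so they get longer codes, and no code is a prefix of another.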
On 11/9/07, Larry Bates <[EMAIL PROTECTED]> wrote:
>
> [EMAIL PROTECTED] wro
On 2007-11-09, Larry Bates <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
>> I have to get a list of URLs one by one and find the URLs
>> that appear more than once (it can't be more than twice).
>>
>> I thought to put them into a binary search tree; this way
>> they'll be sorted and I'll be
[EMAIL PROTECTED] wrote:
> Hi,
>
> I have to get a list of URLs one by one and find the URLs that appear
> more than once (it can't be more than twice).
>
> I thought to put them into a binary search tree; this way they'll be
> sorted and I'll be able to check whether a URL already exists.
>
> Couldn
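The binary-search-tree idea from the original question can be sketched in a few lines. This is an illustration of the proposal, not code from the thread, and in CPython a plain set or dict will normally beat it; note also that this naive tree degenerates to a linked list on sorted input:

```python
class BSTNode:
    """Binary search tree node keyed on the URL string."""
    def __init__(self, url):
        self.url, self.count = url, 1
        self.left = self.right = None

def insert(root, url):
    """Insert url, bumping the count on a repeat visit; returns the root."""
    if root is None:
        return BSTNode(url)
    if url < root.url:
        root.left = insert(root.left, url)
    elif url > root.url:
        root.right = insert(root.right, url)
    else:
        root.count += 1
    return root

def duplicates(root):
    """In-order traversal: duplicated URLs come out in sorted order."""
    if root is None:
        return []
    mid = [root.url] if root.count > 1 else []
    return duplicates(root.left) + mid + duplicates(root.right)

root = None
for url in ["b", "a", "c", "a"]:
    root = insert(root, url)
print(duplicates(root))  # ['a']
```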