: The list would be unreadable if everyone spammed the bottom of their
: email like Otis'. It's just bad form.
If you'd like to debate project policy on what is/isn't acceptable on any
of the Lucene mailing lists, please start a new thread on general@lucene
(the list that exists precisely for that kind of discussion).
We still disagree.
On Fri, Dec 16, 2011 at 12:29 PM, Jason Rutherglen <
jason.rutherg...@gmail.com> wrote:
> Ted,
>
> The list would be unreadable if everyone spammed the bottom of their
> email like Otis'. It's just bad form.
>
> Jason
>
> On Fri, Dec 16, 2011 at 12:00 PM, Ted Dunning wrote:
Ted,
The list would be unreadable if everyone spammed the bottom of their
email like Otis'. It's just bad form.
Jason
On Fri, Dec 16, 2011 at 12:00 PM, Ted Dunning wrote:
> Sounds like we disagree.
>
> On Fri, Dec 16, 2011 at 11:56 AM, Jason Rutherglen <
> jason.rutherg...@gmail.com> wrote:
>
Sounds like we disagree.
On Fri, Dec 16, 2011 at 11:56 AM, Jason Rutherglen <
jason.rutherg...@gmail.com> wrote:
> Ted,
>
> "...- FREE!" is stupid idiot spam. It's annoying and not suitable.
>
> On Fri, Dec 16, 2011 at 11:45 AM, Ted Dunning wrote:
> > I thought it was slightly clumsy, but it
Ted,
"...- FREE!" is stupid idiot spam. It's annoying and not suitable.
On Fri, Dec 16, 2011 at 11:45 AM, Ted Dunning wrote:
> I thought it was slightly clumsy, but it was informative. It seemed like a
> fine thing to say. Effectively it was "I/we have developed a tool that
> will help you so
I thought it was slightly clumsy, but it was informative. It seemed like a
fine thing to say. Effectively it was "I/we have developed a tool that
will help you solve your problem". That is responsive to the OP and it is
clear that it is a commercial deal.
On Fri, Dec 16, 2011 at 10:02 AM, Jason Rutherglen wrote:
Wow, the shameless plugging of a product (footer) has hit a new low, Otis.
On Fri, Dec 16, 2011 at 7:32 AM, Otis Gospodnetic wrote:
> Hi Yury,
>
> Not sure if this was already covered in this thread, but with N smaller cores
> on a single N-CPU-core box you could run N queries in parallel over small
Hi Yury,
Not sure if this was already covered in this thread, but with N smaller cores
on a single N-CPU-core box you could run N queries in parallel over smaller
indices, which may be faster than a single query going against a single big
index, depending on how many concurrent query requests there are.
) was invaluable!
Otis
Performance Monitoring SaaS for Solr -
http://sematext.com/spm/solr-performance-monitoring/index.html - FREE!
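A minimal sketch of the N-queries-in-parallel idea, assuming SolrJ; HttpSolrClient and its Builder come from newer SolrJ releases than this thread, and the core URLs and query field are only illustrative:

  // Fan the same query out to one core per CPU core and collect the
  // per-core responses; merging/re-sorting the hits is left out for brevity.
  import java.util.ArrayList;
  import java.util.Arrays;
  import java.util.List;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.Future;
  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class ParallelCoreQuery {
      public static void main(String[] args) throws Exception {
          List<String> coreUrls = Arrays.asList(
              "http://localhost:8983/solr/core0",
              "http://localhost:8983/solr/core1",
              "http://localhost:8983/solr/core2",
              "http://localhost:8983/solr/core3");
          ExecutorService pool = Executors.newFixedThreadPool(coreUrls.size());
          List<Future<QueryResponse>> futures = new ArrayList<>();
          for (String url : coreUrls) {
              futures.add(pool.submit(() -> {
                  // One query per core, each running on its own thread/CPU core.
                  HttpSolrClient client = new HttpSolrClient.Builder(url).build();
                  try {
                      return client.query(new SolrQuery("text:lucene"));
                  } finally {
                      client.close();
                  }
              }));
          }
          long total = 0;
          for (Future<QueryResponse> f : futures) {
              total += f.get().getResults().getNumFound();
          }
          pool.shutdown();
          System.out.println("hits across all cores: " + total);
      }
  }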
>
> From: Robert Stewart
> To: solr-user@lucene.apache.org
> Sent: Thursday, December 15, 2011 2:16 PM
> Sent: Thursday, December 15, 2011 2:16 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Core overhead
>
> On 12/15/2011 4:46 PM, Robert Petersen wrote:
> > Sure that is possible, but doesn't that defeat the purpose of sharding?
> > Why distribute acros
but I
never tried it out. Thanks for the heads up on that topic... :)
-Original Message-
From: Yury Kats [mailto:yuryk...@yahoo.com]
Sent: Thursday, December 15, 2011 2:16 PM
To: solr-user@lucene.apache.org
Subject: Re: Core overhead
On 12/15/2011 4:46 PM, Robert Petersen wrote:
> Sur
On 12/15/2011 4:46 PM, Robert Petersen wrote:
> Sure that is possible, but doesn't that defeat the purpose of sharding?
> Why distribute across one machine? Just keep all in one index in that
> case is my thought there...
To be able to scale w/o re-indexing. Also often referred to as "micro-sharding".
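A minimal sketch of the micro-sharding idea, again assuming SolrJ with hypothetical host/core names; when a shard later moves to another box, only the shards string changes, not the indexes:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class MicroShardQuery {
      public static void main(String[] args) throws Exception {
          // All four shards live on one machine today; if core2/core3 are later
          // moved to another host, only this list changes -- nothing is re-indexed.
          String shards = "localhost:8983/solr/core0,localhost:8983/solr/core1,"
                        + "localhost:8983/solr/core2,localhost:8983/solr/core3";
          SolrQuery q = new SolrQuery("*:*");
          q.set("shards", shards);
          HttpSolrClient client =
              new HttpSolrClient.Builder("http://localhost:8983/solr/core0").build();
          QueryResponse rsp = client.query(q);
          System.out.println("total hits across shards: " + rsp.getResults().getNumFound());
          client.close();
      }
  }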
To: solr-user@lucene.apache.org
Subject: Re: Core overhead
On 12/15/2011 1:41 PM, Robert Petersen wrote:
> loading. Try it out, but make sure that the functionality you are
> actually looking for isn't sharding instead of multiple cores...
Yes, but the way to achieve sharding is to have multiple cores.
On 12/15/2011 1:41 PM, Robert Petersen wrote:
> loading. Try it out, but make sure that the functionality you are
> actually looking for isn't sharding instead of multiple cores...
Yes, but the way to achieve sharding is to have multiple cores.
The question then becomes -- how many cores (shards)?
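For what it's worth, extra cores (shards) can be created on a running instance through the stock CoreAdmin handler; the core name and instanceDir below are only placeholders:

  http://localhost:8983/solr/admin/cores?action=CREATE&name=shard2&instanceDir=shard2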
One other thing I did not mention is GC pauses. If you have smaller
heap sizes, you would have fewer very long GC pauses, so that can be an
advantage of having many cores (if the cores are distributed into separate
SOLR instances, as separate processes). I think you can expect a 1
second pause for each GB of heap.
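To put rough, purely illustrative numbers on that rule of thumb: a single instance with a 20 GB heap risks worst-case pauses on the order of 20 seconds, while ten instances each started with something like java -Xms2g -Xmx2g -jar start.jar should top out around 2 seconds apiece, at the cost of running ten JVMs.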
It is true the number of terms may be much more than N/10 (or even N for
each core), but it is the number of docs per term that will really
matter. So you can have N terms in each core, but each term has 1/10 the
number of docs on avg.
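As a made-up illustration: in a 10M-doc index where the term "lucene" occurs in 500K docs, splitting the docs evenly across 10 cores still leaves "lucene" in every core's term dictionary, but each core's postings list for it holds only ~50K docs, so per-core work for that term drops roughly tenfold even though the term count barely shrinks.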
2011/12/15 Yury Kats :
> On 12/15/2011 1:07 PM, Robert Stewart wrot
From: Yury Kats [mailto:yuryk...@yahoo.com]
Sent: Thursday, December 15, 2011 10:31 AM
To: solr-user@lucene.apache.org
Subject: Re: Core overhead
On 12/15/2011 1:07 PM, Robert Stewart wrote:
> I think overall memory usage would be close to the same.
Is this really so? I suspect that the consumed memory is in direct
proportion to the number of terms in the index.
On 12/15/2011 1:07 PM, Robert Stewart wrote:
> I think overall memory usage would be close to the same.
Is this really so? I suspect that the consumed memory is in direct
proportion to the number of terms in the index. I also suspect that
if I divided 1 core with N terms into 10 smaller cores, each smaller core
would still have close to N terms rather than N/10.
I don't have any measured data, but here are my thoughts.
I think overall memory usage would be close to the same.
Speed will be slower in general, because if search speed is approx
log(n) then 10 * log(n/10) > log(n), and also if merging results you
have overhead in the merge step, and also overhead fetching results from each core.
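For illustration, with n = 10,000,000 and base-2 logs: log(n) is roughly 23, while 10 * log(n/10) is roughly 10 * 20 = 200, so the summed per-core comparisons are almost an order of magnitude more even before the merge overhead.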