You really can't tell until you prototype and measure. Here's a long
blog post on why what you're asking, although a reasonable request,
is just about impossible to answer without prototyping and measuring.
http://searchhub.org/2012/07/23/sizing-hardware-in-the-abstract-why-we-dont-have-a-definitive-an
On 12 August 2014 22:12, Noble Paul wrote:
> The machines were 32GB RAM boxes. You must do the RAM requirement
>
And how many machines are running Solr?
I expect that I will have to add more servers. What I am looking for is how
do I calculate how m
The machines were 32GB RAM boxes. You must do the RAM requirement
calculation for your indexes. The number of indexes alone won't be enough
to arrive at the RAM requirement.
On Tue, Aug 12, 2014 at 6:59 PM, Ramprasad Padmanabhan <
ramprasad...@gmail.com> wrote:
> On 12 August 2014 18:18, Noble
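[Noble's point about doing the RAM calculation per index can be turned into a rough back-of-envelope sketch. Everything below is an illustrative assumption; the per-core heap figure and cache fraction are not from this thread, only the 400 cores of ~200MB sizing is.]

```python
# Back-of-envelope RAM estimate for a node hosting many small Solr cores.
# The default figures below are assumptions for illustration, not
# measurements from this thread -- replace them with your own benchmarks.

def estimate_ram_gb(num_cores,
                    index_mb_per_core=200,    # on-disk index size per core
                    heap_mb_per_core=50,      # assumed JVM heap per loaded core
                    loaded_fraction=1.0,      # share of cores loaded at once
                    cache_fraction=0.5):      # share of index kept in OS page cache
    heap_mb = num_cores * loaded_fraction * heap_mb_per_core
    page_cache_mb = num_cores * index_mb_per_core * cache_fraction
    return (heap_mb + page_cache_mb) / 1024.0

# 400 cores of ~200MB each, all loaded, half of the index in page cache:
print(round(estimate_ram_gb(400), 1))  # -> 58.6
```

Loading only a fraction of the cores at a time (as with transient cores) shrinks the heap term but not the disk-cache term, which is why index size, not core count, dominates the estimate.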
Ramprasad Padmanabhan [ramprasad...@gmail.com] wrote:
> I have a single machine 16GB RAM with 16 CPU cores
Ah! I thought you had more machines, each with 16 Solr cores.
This changes a lot. 400 Solr cores of ~200MB ~= 80GB of data. You're aiming for
7 times that, so about 560GB of data. Running t
On 12 August 2014 18:18, Noble Paul wrote:
> Hi Ramprasad,
>
>
> I have used it in a cluster with millions of users (1 user per core) in
> legacy cloud mode. We used the on-demand core loading feature where each
> Solr had 30,000 cores and at a time only 2000 cores were in memory. You are
> just
Hi Paul and Ramprasad,
I follow your discussion with interest as I will have more or less the
same requirement.
When you say that you use on-demand core loading, are you talking about
the LotsOfCores stuff?
Erick told me that it does not work very well in a distributed
environment.
How do you han
Hi Ramprasad,
I have used it in a cluster with millions of users (1 user per core) in
legacy cloud mode. We used the on-demand core loading feature where each
Solr had 30,000 cores and at a time only 2000 cores were in memory. You are
just hitting 400 and I don't see much of a problem. What is y
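[For reference, the on-demand loading Noble describes is the "transient cores" (LotsOfCores) mechanism in Solr 4.x. A minimal sketch, assuming new-style solr.xml and core discovery; the core name is a placeholder, and only the 2000 figure comes from the message above.]

```xml
<!-- solr.xml: keep at most 2000 transient cores loaded at a time -->
<solr>
  <int name="transientCacheSize">2000</int>
</solr>
```

```
# core.properties for each per-user core: loaded lazily, evictable
name=user_12345
transient=true
loadOnStartup=false
```

Cores marked this way are only opened when a request arrives, and the least recently used ones are closed once the transient cache is full.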
On Tue, 2014-08-12 at 14:14 +0200, Ramprasad Padmanabhan wrote:
> Sorry for missing information. My solr-cores take less than 200MB of
> disk
So ~3GB/server. If you do not have especially heavy queries, a high query
rate, or heavy requirements for index availability, that really sounds
like you could p
Sorry for missing information. My solr-cores take less than 200MB of disk.
What I am worried about is that if I run too many cores on a single Solr
machine, there will be a limit to the number of concurrent searches it can
support. I am still benchmarking for this.
Also another major bottleneck I fin
On Tue, 2014-08-12 at 11:50 +0200, Ramprasad Padmanabhan wrote:
> Are there documented benchmarks with the number of cores?
> As of now I just have a test bed.
>
>
> We have 150 million records ( will go up to 1000 M ) , distributed in 400
> cores.
> A single machine 16GB RAM + 16 cores search is w
Are there documented benchmarks with the number of cores?
As of now I just have a test bed.
We have 150 million records (will go up to 1000 M), distributed in 400
cores.
A single machine (16GB RAM + 16 CPU cores) search is working "fine",
but I am still not sure whether this will work fine in production.
Obvio
I think this question is more aimed at the design and performance of a large
number of cores.
Also, Solr is designed to handle multiple cores effectively; however, it
would be interesting to know if you have observed any performance problems
with a growing number of cores, with the number of nodes and Solr versio
On Tue, 2014-08-12 at 08:40 +0200, Ramprasad Padmanabhan wrote:
> I need to store in Solr all the data of my clients' mailing activity.
>
> The data contains metadata like From, To, Date, Time, Subject, etc.
>
> I would easily have 1000 million records every 2 months.
If standard searches are always insid
Hi Ramprasad,
You can certainly have a system with hundreds of cores. I know of more than
a few people who have done that successfully in their setups.
At the same time, I'd also recommend having a look at SolrCloud.
SolrCloud takes away operational pains like replication/recovery etc
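[For anyone evaluating the SolrCloud route, a minimal sketch of creating a sharded, replicated collection via the Collections API. The hostname, collection name, and shard/replica counts are placeholders, not recommendations from this thread.]

```
# Create a 4-shard collection with 2 replicas per shard (placeholder values)
curl "http://localhost:8983/solr/admin/collections?action=CREATE&name=mailmeta&numShards=4&replicationFactor=2"
```

SolrCloud then handles routing documents to shards and failing over between replicas, instead of you managing one core per tenant by hand.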