On 12/26/2013 02:29 PM, Shawn Heisey wrote:
On 12/24/2013 8:35 AM, David Santamauro wrote:
>> You may have one or more of the SolrCloud 'bootstrap' options on the
>> startup commandline. The bootstrap options are intended to be used
>> once, in order to bootstrap from a non-SolrCloud setup to a SolrCloud
>> setup.
>
> No, no unnecessary options ...

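For context, the 'bootstrap' options discussed above are startup system
properties in Solr 4.x. A minimal sketch of a one-time bootstrap on a
Jetty-based install (the ZooKeeper hosts, config path, and config name are
placeholders, not values from this thread):

    java -DzkHost=zk1:2181,zk2:2181,zk3:2181 \
         -Dbootstrap_confdir=./solr/collection1/conf \
         -Dcollection.configName=myconf \
         -jar start.jar

Once the configuration has been uploaded to ZooKeeper, later restarts should
drop the bootstrap properties and keep only -DzkHost.
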
On 12/23/2013 05:43 PM, Greg Preston wrote:
> I believe you can just define multiple cores:
> ...
> (this is the old style solr.xml. I don't know how to do it in the newer style)

Yes, that is exactly what I did, but somehow the link between shards and
collections gets lost and everything gets ...

On 12/23/2013 08:42 PM, Shawn Heisey wrote:
On 12/23/2013 12:23 PM, David Santamauro wrote:
> I managed to create 8 new cores and the Solr Admin cloud page showed
> them wonderfully as active replicas.
>
> The only issue I have is what goes into solr.xml (I'm using tomcat)?
>
> Putting
>
> ...
>
> for each of the new cores I created seemed like the reasonable approach ...

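For reference, a new core can be registered as a replica of an existing
shard with a single CoreAdmin CREATE call. A sketch, with the host, port,
core name, collection, and shard as placeholder values:

    curl "http://bignode:8080/solr/admin/cores?action=CREATE&name=collection1_shard1_replica2&collection=collection1&shard=shard1"

Because the collection and shard parameters are supplied, SolrCloud attaches
the new core to the named shard as an additional replica, which is how newly
created cores show up as active replicas on the cloud page.
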
I believe you can just define multiple cores:

...

(this is the old style solr.xml. I don't know how to do it in the newer style)

Also, make sure you don't define a non-relative <dataDir> in
solrconfig.xml, or you may run into issues with cores trying to use
the same data dir.

-Greg

On Mon, Dec ...

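In the old-style solr.xml Greg describes, each replica gets its own <core>
element. A sketch with illustrative core and directory names, including the
collection and shard attributes that keep the shard/collection link from
getting lost:

    <solr persistent="true">
      <cores adminPath="/admin/cores">
        <core name="collection1_shard1_replica2"
              instanceDir="collection1_shard1_replica2"
              collection="collection1" shard="shard1"/>
        <core name="collection1_shard2_replica2"
              instanceDir="collection1_shard2_replica2"
              collection="collection1" shard="shard2"/>
        <!-- ...one <core> element per hosted replica... -->
      </cores>
    </solr>

Leaving <dataDir> unset (or relative) keeps each core's index under its own
instanceDir, which is the point of the warning above.
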
On 12/23/2013 05:03 PM, Greg Preston wrote:
> Yes, I'm well aware of the performance implications, many of which are
> mitigated by 2TB of SSD and 512GB RAM.

I've got a very similar setup in production: 2TB SSD, 256G RAM (128G
heaps), and 1 - 1.5 TB of index per node. We're in the process of
splitting that to multiple JVMs per host. GC pauses ...

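Splitting one large heap across several JVMs on the same host mostly comes
down to giving each instance its own port, solr home, and a smaller heap.
A rough sketch with assumed numbers (heap sizes, ports, and ZooKeeper hosts
are illustrative only):

    (cd node1 && java -Xms16g -Xmx16g -Djetty.port=8983 \
        -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar &)
    (cd node2 && java -Xms16g -Xmx16g -Djetty.port=8984 \
        -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar &)

Smaller per-JVM heaps generally mean shorter garbage-collection pauses,
which appears to be the motivation behind the split described above.
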
Shawn,

I managed to create 8 new cores and the Solr Admin cloud page showed
them wonderfully as active replicas.

The only issue I have is what goes into solr.xml (I'm using tomcat)?

Putting

...

for each of the new cores I created seemed like the reasonable approach,
but when I tested a tomcat ...

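One Tomcat-specific note: system properties such as zkHost go into
JAVA_OPTS rather than onto a Jetty commandline. A sketch, assuming a
standard Tomcat layout (path and hosts are placeholders):

    # $CATALINA_HOME/bin/setenv.sh
    export JAVA_OPTS="$JAVA_OPTS -DzkHost=zk1:2181,zk2:2181,zk3:2181"

The per-core definitions themselves live in solr.xml under the Solr home,
as in the old-style example above.
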
On 12/22/2013 09:48 PM, Shawn Heisey wrote:
On 12/22/2013 2:10 PM, David Santamauro wrote:
> My goal is to have a redundant copy of all 8 currently running, but
> non-redundant shards. This setup (8 nodes with no replicas) was a test
> and it has proven quite functional from a performance perspective.
> Loading, though, takes almost 3 weeks ...

Thanks for the reply.

My goal is to have a redundant copy of all 8 currently running, but
non-redundant shards. This setup (8 nodes with no replicas) was a test
and it has proven quite functional from a performance perspective.
Loading, though, takes almost 3 weeks, so I'm really not in a position ...

Hi David;

When you start up 8 nodes within that machine, they will be replicas of
the existing shards and you will accomplish what you want. However, if you
can give more detail about your hardware infrastructure and needs, I can
offer you a design.

Thanks;
Furkan KAMACI

On Sunday, 22 December 2013, David Santamauro wrote: ...

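A sketch of what "8 nodes within that machine" could look like (ports,
directories, and ZooKeeper hosts are placeholder values): each instance
runs out of its own directory and joins the cluster via zkHost.

    for port in 7574 7575 7576 7577 7578 7579 7580 7581; do
      (cd node_$port && \
       java -Djetty.port=$port \
            -DzkHost=zk1:2181,zk2:2181,zk3:2181 \
            -jar start.jar > solr.log 2>&1 &)
    done

In Solr 4.x a node joining the cluster does not automatically pick up a
replica; a replica core would still need to be created on each instance,
e.g. with the CoreAdmin call sketched earlier.
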
any hint?

On 12/22/2013 06:48 AM, David Santamauro wrote:
> Hi,
>
> I have an 8-node setup currently with 1 shard per node (no redundancy).
> These 8 nodes are smaller machines, not capable of supporting the entire
> collection.
>
> I have another machine resource that can act as another node, and this
> last node is capable of holding the entire collection. I'd like ...