On 1/10/2016 11:56 PM, Asanka Sanjaya Herath wrote:
> I tried to create a Solr client using the following code.
>
> solrClient = new CloudSolrClient(zkHost);
> solrClient.setDefaultCollection(solrCollection);
>
> SolrJ version: 5.4.0
>
> Project built successfully, but at run time I get the following error.
Hi,
I tried to create a Solr client using the following code.
solrClient = new CloudSolrClient(zkHost);
solrClient.setDefaultCollection(solrCollection);
SolrJ version: 5.4.0
Project built successfully, but at run time I get the following error. Any
help is appreciated.
Main class [org.apache.oozi
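For comparison, a complete minimal sketch of the client creation above (the ZK address and collection name are placeholders, not values from the thread). CloudSolrClient needs solr-solrj 5.4.0 and its transitive dependencies on the runtime classpath, not just at build time, so a job runner such as Oozie has to ship those jars too:

```java
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class CloudClientExample {
    public static void main(String[] args) throws Exception {
        String zkHost = "zk1:2181,zk2:2181,zk3:2181"; // placeholder ensemble
        CloudSolrClient solrClient = new CloudSolrClient(zkHost);
        try {
            solrClient.setDefaultCollection("mycollection"); // placeholder name
            // Run a match-all query to verify the client can reach the cluster.
            QueryResponse rsp = solrClient.query(new SolrQuery("*:*"));
            System.out.println("numFound=" + rsp.getResults().getNumFound());
        } finally {
            solrClient.close();
        }
    }
}
```

This only runs against a live SolrCloud cluster, so treat it as a sketch rather than something to copy verbatim.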
Hello,
I am using the spellcheck component for spelling suggestions and I've used
the same configurations in two separate projects; the only difference is
that one project uses a single core and the other is a collection on
SolrCloud with three shards. The single core has about 56K docs and the one on
On 1/10/2016 12:00 PM, Robert Brown wrote:
> I'm thinking more about how the external load-balancer will know if a
> node is down, so as to take it out of the pool of active servers before
> even attempting to send a query to it.
>
> I could ping, though that just means the IP is alive. I could configure the
> load-
On 1/10/2016 2:29 AM, Allan Kamau wrote:
> We are able to load several cores into Solr 5.3.1.
> The problem is that after a restart of the server, these cores seem to get
> deleted.
> Is there a way to make cores loaded in Solr 5.x survive a server restart?
> Could there be a setting in solr.xml or
I'm thinking more about how the external load-balancer will know if a
node is down, so as to take it out of the pool of active servers before
even attempting to send a query to it.
I could ping, though that just means the IP is alive. I could configure the
load-balancer to actually try a query, but this may be (
This is really confusing. You say:
bq: Basically I am going with master slave approach
OK, classic Solr master/slave? Or are you using this
in a different context?
bq: the application pushing
data to master will need to preview the search and if the search is deemed
useful/appropriate I need the
bq: Well, a good reason would be if you want your system to
continue to operate if 2 ZK nodes lose communication with
the rest of the cluster or go down completely
My argument is usually that if you are losing 2 of 3 ZK nodes
at the same time with any regularity, you probably have
problems that wo
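The quorum arithmetic behind this argument can be sketched as follows (a small illustration, not from the thread): a Zookeeper ensemble stays writable only while a strict majority of its nodes is up.

```java
public class ZkQuorum {
    // Majority quorum for an ensemble of n nodes: floor(n/2) + 1.
    static int quorum(int n) {
        return n / 2 + 1;
    }

    // Largest number of simultaneous node failures the ensemble survives.
    static int tolerated(int n) {
        return n - quorum(n);
    }

    public static void main(String[] args) {
        // A 3-node ensemble needs 2 up, so it tolerates only 1 failure;
        // losing 2 of 3 halts it. A 5-node ensemble tolerates 2 failures.
        System.out.println(quorum(3) + " " + tolerated(3)); // 2 1
        System.out.println(quorum(5) + " " + tolerated(5)); // 3 2
    }
}
```

This is also why even-sized ensembles buy nothing: 4 nodes need a quorum of 3 and still tolerate only 1 failure.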
For health checks, you can go ahead and get the real IP addresses and
ping them directly if you care to. Or just let Zookeeper do that
for you. One of the tasks of Zookeeper is pinging all the machines
with all the replicas and, if any of them are unreachable, telling the
rest of the cluster tha
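If you do want the load-balancer to check nodes directly, the usual target is Solr's ping handler at /solr/&lt;collection&gt;/admin/ping, which executes a real (configurable) query rather than just confirming the IP answers. A minimal sketch of such a check, assuming the default URL layout (the class and method names are made up for illustration):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class SolrHealthCheck {
    // Returns true when the ping handler answers HTTP 200 within the timeout;
    // any connection failure or non-200 status counts as "node down".
    static boolean isHealthy(String baseUrl, String collection, int timeoutMs) {
        try {
            URL url = new URL(baseUrl + "/solr/" + collection + "/admin/ping");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setConnectTimeout(timeoutMs);
            conn.setReadTimeout(timeoutMs);
            return conn.getResponseCode() == 200;
        } catch (Exception e) {
            return false;
        }
    }
}
```

The advantage over a plain TCP ping is that a node whose servlet container is up but whose core is broken returns a non-200 status and drops out of rotation.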
The key here is whether you are connecting to the same
Zookeeper, an internal or an external one. So if you
use the -c option without providing a -z option, you use
the embedded Zookeeper. If you later start with a -z
option, that's a _different_ zookeeper.
And, btw, Zookeeper defaults to keepin
Let me investigate once more how the cores we have in our deployment were
created and I will provide an update as well as a better problem
description.
Allan.
On Sun, Jan 10, 2016 at 3:14 PM, Alexandre Rafalovitch
wrote:
> Did you by any chance start the first time with bin/solr start -e
>
> An
Thanks Erick,
For the health-checks on the load-balancer side, would you recommend a
simple query, or is there a reliable ping or similar for this scenario?
Cheers,
Rob
On 09/01/16 23:44, Erick Erickson wrote:
bq: is it best/good to get the CLUSTERSTATUS via the collection API
and explicitl
Did you by any chance start the first time with bin/solr start -e
And then bin/solr restart?
In that case, the solr home was not set after restart and needs to be
passed in manually.
Or some other unexpected solr.home situation.
Just poking in the dark here.
Regards,
Alex
On 10 Jan 2016 8:2
What do you mean by cores getting deleted? Do the files created on the
filesystem for these cores disappear?
How are you starting and stopping Solr? Is this SolrCloud or standalone
mode?
On Sun, Jan 10, 2016 at 2:59 PM, Allan Kamau wrote:
> We are able to load several cores into Solr 5.3.1.
> The problem
We are able to load several cores into Solr 5.3.1.
The problem is that after a restart of the server, these cores seem to get
deleted.
Is there a way to make cores loaded in Solr 5.x survive a server restart?
Could there be a setting in solr.xml or perhaps the "core.properties" files
that would ena
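For background on why cores can appear to vanish: since Solr 5.x, standalone cores are found by "core discovery" — at startup Solr walks the solr home directory and loads every core that has a core.properties file. Cores are no longer enumerated in solr.xml, so if a restart points Solr at a different solr home (for example, after starting the first time with bin/solr start -e, which uses an example directory), the discovery walk simply never sees them. A minimal core.properties looks like this (the core name is a placeholder):

```
# <solr.home>/mycore/core.properties
# An empty file is enough; with no "name" property the core is
# named after its directory.
name=mycore
```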