Hi all,
I'm running into a situation where our SolrCloud cluster often gets into a
bad state where our Solr nodes frequently respond with "no servers hosting
shard" even though the node that hosts that shard is clearly up. We suspect
that this is a state bug where some servers are somehow ending up with
I sometimes see the following in my logs:
ERROR org.apache.solr.core.SolrCore –
org.apache.lucene.queryparser.surround.query.TooManyBasicQueries: Exceeded
maximum of 1000 basic queries.
What does this mean? Does this mean that we have issued a query with too
many terms? Or that the number of
> to hit that limit?
>
>
> —
> Erik Hatcher, Senior Solutions Architect
> http://www.lucidworks.com
>
>
>
>
> > On Mar 13, 2015, at 9:44 AM, Ian Rose wrote:
> >
> > I sometimes see the following in my logs:
> >
> > ERROR org.apache.solr.core.So
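For reference: that exception comes from the surround query parser, which counts
the "basic" term-level queries a request expands into (wildcards and prefixes can
expand into many) and refuses to go past 1,000 by default. If the queries are
legitimate, the limit can be raised per request with the parser's maxBasicQueries
parameter; roughly, with an illustrative collection and terms, URL-encoding omitted:

http://localhost:8983/solr/yourcollection/select?q={!surround maxBasicQueries=10000}3w(solr, cloud*)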
Hi all -
I'm sure this topic has been covered before but I was unable to find any
clear references online or in the mailing list.
Are there any rules of thumb for how many cores (aka shards, since I am
using SolrCloud) is "too many" for one machine? I realize there is no one
answer (depends on s
—
> Erik Hatcher, Senior Solutions Architect
> http://www.lucidworks.com
>
>
>
>
> > On Mar 24, 2015, at 8:55 AM, Ian Rose wrote:
> >
> > Hi Erik -
> >
> > Sorry, I totally missed your reply. To the best of my knowledge,
us to a Solr
> "core".)
>
>
> -- Jack Krupansky
>
> On Tue, Mar 24, 2015 at 9:02 AM, Ian Rose wrote:
>
> > Hi all -
> >
> > I'm sure this topic has been covered before but I was unable to find any
> > clear references online or in the
First off thanks everyone for the very useful replies thus far.
Shawn - thanks for the list of items to check. #1 and #2 should be fine
for us and I'll check our ulimit for #3.
To add a bit of clarification, we are indeed using SolrCloud. Our current
setup is to create a new collection for each
But not much, so you can get pretty far with
> relatively little RAM.
> Our version of Solr is based on Apache Solr 4.4.0, but I expect/hope it
> did not get worse in newer releases.
>
> Just to give you some idea of what can at least be achieved - in the
> high-end of #rep
Whoops - sorry folks, I sent this prematurely. After typing this out I think
I have it figured out - although SPLITSHARD ignores maxShardsPerNode,
ADDREPLICA does not. So ADDREPLICA fails because I already have too many
shards on a single node.
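For anyone hitting the same thing, the call that trips the limit is an ADDREPLICA
along these lines (host, collection, and shard names illustrative):

http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=testdrive&shard=shard1_0&node=192.168.1.2:8983_solr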
On Wed, Apr 8, 2015 at 11:18 PM, Ian Rose wrote
h is to hand-edit
> clusterstate.json, which is very ill-advised. If you absolutely must,
> it's best to stop all your Solr nodes, back up the current clusterstate
> in ZK, modify it, and then start your nodes.
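A hedged aside on taking that backup: the zkcli script that ships with Solr can
copy the znode out and push an edited copy back. The script path below is the 5.x
layout and the /tmp filenames are just examples; check zkcli.sh's usage output for
your version.

server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd getfile /clusterstate.json /tmp/clusterstate.json
# edit /tmp/clusterstate.json, then push it back:
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd putfile /clusterstate.json /tmp/clusterstate.json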
>
> On Wed, Apr 8, 2015 at 10:21 PM, Ian Rose wrote:
> > I
On my local machine I have the following test setup:
* 2 "nodes" (JVMs)
* 1 collection named "testdrive", that was originally created with
numShards=1 and maxShardsPerNode=1.
* After a series of SPLITSHARD commands, I now have 4 shards, as follows:
testdrive_shard1_0_0_replica1 (L) Active 115
tes
I previously created several collections with maxShardsPerNode=1 but I
would now like to change that (to "unlimited" if that is an option). Is
changing this value possible?
Cheers,
- Ian
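(As far as I know there was no supported way to change maxShardsPerNode in place
on the Solr releases discussed in this thread; later releases added a
MODIFYCOLLECTION action that can update it. Roughly, with an illustrative
collection name and value - check whether your version supports it:

http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=yourcollection&maxShardsPerNode=4

Otherwise the fallback is to recreate the collection with a higher limit.)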
Hi all -
I've just upgraded my dev install of Solr (cloud) from 4.10 to 5.0. Our
client is written in Go, for which I am not aware of an existing client
library, so we wrote our own. One tricky bit for this was the routing logic;
if a document has routing prefix X and belongs to collection Y, we need to know which
d more information
> here,
>
> https://issues.apache.org/jira/browse/SOLR-5473
> https://issues.apache.org/jira/browse/SOLR-5474
>
> Regards
> Hrishikesh
>
>
> On Tue, Apr 14, 2015 at 8:49 AM, Ian Rose wrote:
>
> > Hi all -
> >
> > I've just up
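For anyone else writing a non-Java client, my understanding of the compositeId
router (worth verifying against CompositeIdRouter.java for your Solr version) is
that for a two-part id like "user42!event7" it takes the top 16 bits from a
murmur3 hash of the routing prefix and the bottom 16 bits from a murmur3 hash of
the rest, then finds the shard whose advertised hash range covers that value. A
rough Go sketch, using the third-party github.com/spaolacci/murmur3 package; the
names and the shard layout are illustrative:

package main

import (
	"fmt"
	"strings"

	"github.com/spaolacci/murmur3"
)

// compositeHash approximates Solr's CompositeIdRouter for a two-part id:
// top 16 bits from the routing prefix, bottom 16 bits from the rest.
func compositeHash(prefix, rest string) uint32 {
	h1 := murmur3.Sum32([]byte(prefix))
	h2 := murmur3.Sum32([]byte(rest))
	return (h1 & 0xffff0000) | (h2 & 0x0000ffff)
}

// shardRange mirrors the hash range each shard advertises in cluster state
// (e.g. "80000000-ffffffff"); Solr treats the values as signed 32-bit ints.
type shardRange struct {
	name     string
	min, max int32
}

// pickShard returns the shard whose range covers the document's hash.
func pickShard(ranges []shardRange, id string) string {
	bang := strings.Index(id, "!")
	if bang < 0 {
		return "" // plain ids hash the whole string; omitted for brevity
	}
	h := int32(compositeHash(id[:bang], id[bang+1:]))
	for _, r := range ranges {
		if h >= r.min && h <= r.max {
			return r.name
		}
	}
	return ""
}

func main() {
	// Hypothetical two-shard layout covering the full signed 32-bit space.
	ranges := []shardRange{
		{name: "shard1", min: -2147483648, max: -1},
		{name: "shard2", min: 0, max: 2147483647},
	}
	fmt.Println(pickShard(ranges, "user42!event7"))
}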
Is it possible to run DELETESHARD commands in async mode? Google searches
seem to indicate yes, but not definitively.
My local experience indicates otherwise. If I start with an async
SPLITSHARD like so:
http://localhost:8983/solr/admin/collections?action=splitshard&collection=2Gp&shard=shard1_
Done!
https://issues.apache.org/jira/browse/SOLR-7481
On Tue, Apr 28, 2015 at 11:09 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> This is a bug. Can you please open a Jira issue?
>
> On Tue, Apr 28, 2015 at 8:35 PM, Ian Rose wrote:
>
> > Is it pos
api7
>
> This doesn't mention support for async DELETESHARD calls.
>
> On Tue, Apr 28, 2015 at 8:05 AM, Ian Rose wrote:
>
> > Is it possible to run DELETESHARD commands in async mode? Google
> searches
> > seem to indicate yes, but not definitively.
> >
>
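For the Collections API calls that do honor async, the pattern is to pass an
arbitrary request id and then poll for it with REQUESTSTATUS, e.g. (request id
illustrative):

http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=2Gp&shard=shard1&async=split-1
http://localhost:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=split-1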
that page is the official reference guide and might need fixing if
> it's out of sync.
>
>
> On Tue, Apr 28, 2015 at 10:47 AM, Ian Rose wrote:
>
> > Hi Anshum,
> >
> > FWIW I find that page is not entirely accurate with regard to async
> > params. For exa
Howdy all -
The short version is: We are not seeing Solr Cloud performance scale (even
close to) linearly as we add nodes. Can anyone suggest good diagnostics for
finding scaling bottlenecks? Are there known 'gotchas' that make Solr Cloud
fail to scale?
In detail:
We have used Solr (in non-Clou
ries, right? I am not issuing any
queries, only writes (document inserts). In the case of writes, increasing
the number of shards should increase my throughput (in ops/sec) more or
less linearly, right?
On Thu, Oct 30, 2014 at 4:50 PM, Shawn Heisey wrote:
> On 10/30/2014 2:23 PM, Ian
depending upon how hard the document-generator is
> working.
>
> Also, make sure that you send batches of documents as Shawn
> suggests; I use 1,000 as a starting point.
>
> Best,
> Erick
>
> On Thu, Oct 30, 2014 at 2:10 PM, Shawn Heisey wrote:
> > On 10/30/2014 2:56
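To make the batching advice concrete, here is a minimal Go sketch that sends
1,000 documents in a single JSON update request; the node URL, collection name,
and field names are illustrative, and commits are left to the autoCommit settings:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// doc is a minimal example document; field names are illustrative.
type doc struct {
	ID   string `json:"id"`
	Body string `json:"body_t"`
}

func main() {
	// Build a batch of 1,000 docs and send them in one update request.
	batch := make([]doc, 0, 1000)
	for i := 0; i < 1000; i++ {
		batch = append(batch, doc{ID: fmt.Sprintf("doc-%d", i), Body: "hello"})
	}
	payload, err := json.Marshal(batch)
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:8983/solr/testdrive/update",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}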
nch of replicas to a single shard. When
> the number of docs on each shard grows large enough that you
> no longer get good query performance, _then_ you shard. And
> take the query hit.
>
> If we're talking about inserts, then see above. I suspect your problem is
> that you
nt, right? Performance will be terrible if you issue commits
> > >> after every doc, that's totally an anti-pattern. Doubly so for
> > >> optimizes. Since you showed us your solrconfig autocommit
> > >> settings I'm assuming not, but want to be sure.
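For reference, the usual shape of those settings for heavy indexing is a hard
autoCommit with openSearcher=false plus a longer soft commit for visibility.
Roughly, in solrconfig.xml, with illustrative intervals:

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>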
If I add some documents to a SolrCloud shard in a collection "alpha", I can
post them to "/solr/alpha/update". However, I notice that you can also post
them using the shard name, e.g. "/solr/alpha_shard4_replica1/update" - in
fact this is what Solr seems to do internally (like if you send documents
hat you can hit any
> SolrCloud node (even the ones not hosting this collection) and it will
> still work. So for a non-Java client, a load balancer can be set up in front
> of the entire cluster and things will just work.
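In other words, a non-Java client can simply POST updates to the collection-level
endpoint on whichever node the load balancer picks and let Solr forward them to
the right shard leader, e.g. (host and document illustrative):

curl -H 'Content-Type: application/json' -d '[{"id":"doc1"}]' http://some-node:8983/solr/alpha/update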
>
> On Wed, Nov 5, 2014 at 8:50 PM, Ian Rose wrote:
>
>
> rack enough _more_ clients to drive Solr at the same level. In this
> case I'll go out on a limb and predict near 2x throughput increases.
>
> One additional note, though. When you add _replicas_ to shards expect
> to see a drop in throughput that may be quite significant, 20-40%
>
Howdy -
What is the current best practice for migrating shards to another machine?
I have heard suggestions that it is "add replica on new machine, wait for
it to catch up, delete original replica on old machine". But I wanted to
check to make sure...
And if that is the best method, two follow-u
"recovering" in
> >> clusterstate.json and wait until it's "active."
> >>
> >> 2. I believe this to be the case, but I'll wait for someone else to
> chime in
> >> who knows better. Also, I wonder if there's a difference betwe
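For reference, that sequence maps onto the Collections API roughly like this
(host, collection, shard, and replica names illustrative):

1. http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=alpha&shard=shard1&node=newhost:8983_solr
2. Poll clusterstate.json (or action=CLUSTERSTATUS) until the new replica's state
   goes from "recovering" to "active".
3. http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=alpha&shard=shard1&replica=core_node3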
Howdy -
We are using composite IDs of the form <user>!<event>. This ensures that
all events for a user are stored in the same shard.
I'm assuming from the description of how composite ID routing works, that
if you split a shard the "split point" of the hash range for that shard is
chosen to maintain the invariant
I don't think ZooKeeper has a REST API. You'll need to use a ZooKeeper
client library in your language (or roll one yourself).
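For a Go client, one option is the third-party github.com/samuel/go-zookeeper
package; a minimal sketch that reads the shared cluster-state znode (the ZK
address is illustrative, and newer Solr versions may keep per-collection state
at /collections/<name>/state.json instead):

package main

import (
	"fmt"
	"time"

	"github.com/samuel/go-zookeeper/zk"
)

func main() {
	// Connect to the ZooKeeper ensemble that SolrCloud uses.
	conn, _, err := zk.Connect([]string{"localhost:2181"}, 5*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Read the cluster state that Solr publishes for all collections.
	data, _, err := conn.Get("/clusterstate.json")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(data))
}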
On Wed, Nov 19, 2014 at 9:48 AM, nabil Kouici wrote:
> Hi All,
>
> I'm connecting to Solr using the REST API (no library like SolrJ). As my Solr
> configuration is in cloud