We have duplicate records in two shards and want to delete one set of duplicate
records from one shard.
curl --proxy ""
'http://host.abc.com:8983/solr/collection1/update?shards=localhost:8983/solr/collection1&commit=true'
-H "Content-Type: text/xml" --data-binary
'<delete><query>...</query></delete>'
This delete comma
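For what it's worth, the shards parameter is a query-time parameter; a delete-by-query
posted to /update on a SolrCloud node is forwarded to every shard. One approach that has
been suggested for keeping a delete local to a single core is to send it to that core
directly with distrib=false. Treat this as an untested sketch (host, core, and query are
illustrative) and verify it on a test cluster first:
curl --proxy "" 'http://localhost:8983/solr/collection1/update?distrib=false&commit=true' \
  -H "Content-Type: text/xml" --data-binary '<delete><query>id:(dup1 OR dup2)</query></delete>'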
Thank you for replying.
We added a new shard to the same cluster, where some shards are showing Solr version
4.10.0 and this new shard is showing Solr version 4.8.0. All shards source the Solr
software from the same location and use the same startup script. I am surprised how
the older shards are still running Solr
Hi,
We upgraded our cluster to Solr 4.10.0 for a couple of days and then reverted back
to 4.8.0. However, the dashboard still shows Solr 4.10.0. Do you know why?
* solr-spec 4.10.0
* solr-impl 4.10.0 1620776
* lucene-spec 4.10.0
* lucene-impl 4.10.0 1620776
We recently added n
pauses) and exceed the timeout. So upping
the ZK timeout has helped some people avoid this...
FWIW,
Erick
On Wed, Jan 28, 2015 at 7:11 AM, Joshi, Shital wrote:
> We're using Solr 4.8.0
>
>
> -Original Message-
> From: Erick Erickson [mailto:erickerick...@gmail.com]
veral
are very recent.
Try searching the JIRA for Solr for details.
Best,
Erick
On Tue, Jan 27, 2015 at 1:51 PM, Joshi, Shital wrote:
> Hello,
>
> We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes and three
> zookeeper instances. We have noticed that when a leader n
Hello,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes and three
zookeeper instances. We have noticed that when a leader node goes down, the
replica never takes over as leader; the cloud becomes unusable and we have to
bounce the entire cloud for the replica to assume the leader role. Is this
We wrote a script which queries each Solr instance in the cloud
(http://$host/solr/replication?command=details), subtracts the
‘replicableVersion’ number from the ‘indexVersion’ number, converts the difference
to minutes, and alerts if it exceeds 20 minutes. We get alerted many times a day. The soft
commit set
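A minimal sketch of such a check, with illustrative hosts, the 20-minute threshold,
and the field names quoted above:
#!/bin/bash
THRESHOLD_MIN=20    # alert threshold in minutes
for host in solrhost1:8983 solrhost2:8983; do
  resp=$(curl -s "http://$host/solr/replication?command=details&wt=json")
  iv=$(echo "$resp" | grep -o '"indexVersion":[0-9]*' | head -1 | cut -d: -f2)
  rv=$(echo "$resp" | grep -o '"replicableVersion":[0-9]*' | head -1 | cut -d: -f2)
  lag_min=$(( (iv - rv) / 60000 ))   # version numbers are epoch milliseconds
  if [ "$lag_min" -gt "$THRESHOLD_MIN" ]; then
    echo "ALERT: $host is $lag_min minutes behind"
  fi
done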
Hi,
We're updating the Solr cloud from a Java process using the UpdateRequest API:
UpdateRequest req = new UpdateRequest();
req.setResponseParser(new XMLResponseParser());
req.setParam("_shard_", shard);  // our custom shard-routing parameter
req.add(docs);
req.process(server);  // 'server' is the SolrJ client instance (e.g. CloudSolrServer)
We see too many 'searcher open' errors in the log and are wondering if frequent updates
from
your autocommit settings (probably
soft commit) until you no longer see that error message
and see if the problem goes away. If it doesn't, let us know.
Best,
Erick
On Thu, Aug 28, 2014 at 9:39 AM, Joshi, Shital wrote:
> Hi Shawn,
>
> Thanks for your reply.
>
> We did some tests
Hi Shawn,
Thanks for your reply.
We did some tests enabling shards.info=true and confirmed that there is no
duplicate copy of our index.
We have one replica, but many times we see three versions on the Admin GUI Overview
tab. All three have different versions and gens. Is that a problem?
Master (
Hi,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes and three
collections. We recently upgraded from 4.4.0 to 4.8. We have ~850 million
documents.
We are facing an issue where refreshing a Solr query may give different results
(number of documents returned). This issue is se
Yes that was the problem. Switching back works now. Thanks!
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Tuesday, June 10, 2014 4:48 PM
To: solr-user@lucene.apache.org
Subject: Re: Format version is not supported error
On 6/10/2014 1:17 PM, Joshi, Shital wrote
Hi,
We upgraded from Solr version 4.4 to 4.8. In doing so we also upgraded from JDK
1.6 to 1.7. After a few days of testing, we decided to move back to 4.4. We get the
following error on all nodes and our cloud is not usable. How do we fix it?
Format version is not supported (resource:
MMapIndexInpu
Hi,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes. On some of the
boxes we have about 5 million deleted docs, and we have never run an optimize
since the beginning. Does the number of deleted docs have anything to do with
query performance? Should we consider optimization at all i
Hi,
What are some ways to prevent someone from executing arbitrary delete commands against Solr?
Like:
curl http://solr.com:8983/solr/core/update?commit=true -H "Content-Type:
text/xml" --data-binary '<delete><query>*:*</query></delete>'
I understand we can do IP-based access control (change /etc/jetty.xml). Is there
anything Solr provides out
t your
warming query configuration?
Joel Bernstein
Search Engineer at Heliosearch
On Wed, May 7, 2014 at 4:25 PM, Joshi, Shital wrote:
> Hi,
>
> How many auto warming queries are supported per collection in Solr 4.4 and
> higher? We see only one out of three queries in the log when a new searcher is created.
>
> Thanks!
>
>
>
>
s opened, i.e. when
a commit (hard with openSearcher=true, or soft) happens.
Let's see your configuration too, where you think you're setting up the
queries; maybe you've got an error there.
Best,
Erick
On Mon, May 12, 2014 at 8:27 AM, Joshi, Shital wrote:
> Hi,
>
> How man
Hi,
How many auto warming queries are supported per collection in Solr 4.4 and
higher? We see only one out of three queries in the log when a new searcher is created.
Thanks!
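For reference, auto-warming queries are configured as a newSearcher (and/or
firstSearcher) listener in solrconfig.xml; a minimal sketch with illustrative queries:
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">warming query 1</str></lst>
    <lst><str name="q">warming query 2</str></lst>
    <lst><str name="q">warming query 3</str></lst>
  </arr>
</listener>
There is no fixed per-collection limit; every query listed should run when a new
searcher opens.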
We added an id (searcher3) to each searcher query, but it never
gets printed in the log file. Does Solr internally massage the searcher queries?
_
From: Joshi, Shital [Tech]
Sent: Monday, May 12, 2014 11:27 AM
To: 'solr-user@lucene.apache.org
Hi,
How many auto warming queries are supported per collection in Solr 4.4 and
higher? We see only one out of three queries in the log when a new searcher is created.
Shouldn't it print all the searcher queries?
Thanks!
ngs
will trip a commit as well.
For that matter, what are all your commit settings in solrconfig.xml,
both hard and soft?
Best,
Erick
On Tue, Apr 8, 2014 at 10:28 AM, Joshi, Shital wrote:
> Hi,
>
> We have 10 node Solr Cloud (5 shards, 2 replicas) with 30 GB JVM on 60GB
> machine and
Hi,
We have a 10-node Solr Cloud (5 shards, 2 replicas) with a 30 GB JVM on each 60GB
machine and 40 GB of index.
We're constantly noticing that Solr queries take longer while an update (with
commit=false) is in progress. A query that usually takes 0.5 seconds can
take up to 2 minutes while up
Thank you!
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Friday, March 28, 2014 3:14 PM
To: solr-user@lucene.apache.org
Subject: Re: commit=false in Solr update URL
On 3/28/2014 1:02 PM, Joshi, Shital wrote:
> You mean the default for openSearcher is false, right?
2014 12:48 PM
To: solr-user@lucene.apache.org
Subject: Re: commit=false in Solr update URL
On 3/28/2014 10:22 AM, Joshi, Shital wrote:
> What happens when we use commit=false in Solr update URL?
> http://$solr_url/solr/$solr_core/update/csv?commit=false&separator=|&trim=true&skipLi
Hi,
What happens when we use commit=false in the Solr update URL?
http://$solr_url/solr/$solr_core/update/csv?commit=false&separator=|&trim=true&skipLines=2&_shard_=$shardid
1. Does it invalidate all caches? We really need to know this.
2. Nothing happens to the existing searcher, correct?
3
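If the intent is simply to batch commits while loading, one hedged alternative is
commitWithin (value in milliseconds; URL otherwise as above):
curl "http://$solr_url/solr/$solr_core/update/csv?commitWithin=60000&separator=|&trim=true&skipLines=2&_shard_=$shardid"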
Hi,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes.
When the GUI fires a query at the Solr URL:
1. Does the node that receives the query send it to each shard in
parallel or in sequence?
2. From the log file, how do we find the total time taken to integrate results
from the different
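For what it's worth, the receiving node fans a query out to one replica of each shard
in parallel and then merges the responses. A hedged way to see where the time goes is
debugQuery, whose timing section breaks the request down by component (URL illustrative):
curl "http://host.abc.com:8983/solr/collection1/select?q=*:*&debugQuery=true"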
I see lots of messages like this in the solr4 logs. What are they for?
INFO - 2014-03-14 16:42:47.098; org.apache.solr.core.SolrCore; [collection1]
webapp=/solr path=/select
params={NOW=1394829767088&shard.url=us123.abc.com:8983/solr/collection1/|us234.abc.com:8983/solr/collection1/&fl=q_idn_s,score&
On Thu, Feb 27, 2014 at 3:09 PM, Joshi, Shital wrote:
> Hi Michael,
>
> If page cache is the issue, what is the solution?
>
> Thanks!
>
> -Original Message-
> From: Michael Della Bitta [mailto:michael.della.bi...@appinions.com]
> Sent: M
On Mon, Feb 24, 2014 at 5:35 PM, Joshi, Shital wrote:
> Thanks.
>
> We found some evidence that this
On Fri, Feb 21, 2014 at 5:20 PM, Joshi, Shital wr
options for logging GC and
seeing if you can correlate your slow responses to times when your JVM is
garbage collecting.
Hope that helps,
On Feb 20, 2014 4:52 PM, "Joshi, Shital" wrote:
> Hi!
>
> I have a few other questions regarding the Solr4 performance issue we're facing.
Hello,
We have the following hard commit settings in solrconfig.xml:
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:60}</maxTime>
  <maxDocs>10</maxDocs>
  <openSearcher>true</openSearcher>
</autoCommit>
Shouldn't we see the DirectUpdateHandler2 'start commit' and
'end_commit_flush' messages in our lo
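A quick way to look for those messages (log path illustrative):
grep -E 'start commit|end_commit_flush' /var/log/solr/solr.log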
t: Re: Solr4 performance
On 2/18/2014 2:14 PM, Joshi, Shital wrote:
> Thanks much for all suggestions. We're looking into reducing allocated heap
> size of Solr4 JVM.
>
> We're using NRTCachingDirectoryFactory. Does it use MMapDirectory internally?
> Can someone please
Hi,
Thanks much for all the suggestions. We're looking into reducing the allocated heap
size of the Solr4 JVM.
We're using NRTCachingDirectoryFactory. Does it use MMapDirectory internally?
Can someone please confirm?
Would optimization help with performance? We did that in QA (took about 13
hours for 70
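For reference, NRTCachingDirectory wraps a delegate directory, and on 64-bit
platforms that delegate is typically MMapDirectory; the stock solrconfig.xml
selects the factory like this:
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>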
Does Solr4 load the entire index into a memory-mapped file? What is the eviction
policy for this memory-mapped file? Can we control it?
_
From: Joshi, Shital [Tech]
Sent: Wednesday, February 05, 2014 12:00 PM
To: 'solr-user@lucene.apache.org'
Subj
Hi,
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 dynamic compute boxes
(cloud). We're using local disk (/local/data) to store the Solr index files. All
hosts have 60GB of RAM and the Solr4 JVMs run with a 30GB max heap. So far
we have 470 million documents. We are using custom shard
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes with 500
million documents. We're using custom sharding, where we direct all documents
with a specific business date to a specific shard.
With Solr 3.6 we used this command to optimize documents on the master and then let
replication take
:15 PM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
On 8/9/2013 11:15 AM, Joshi, Shital wrote:
> The same thing happens. It only works with N/2 + 1 zookeeper instances up.
Got it.
An update came in on the issue that I filed. This behavior that you're
s
aster/slave setup? Seeing the
queries would help diagnose this. Also, did you try to copy/paste
the configuration from your Solr3 to Solr4? I'd start with the
Solr4 and copy/paste only the parts needed from your Solr3 setup.
Best
Erick
On Mon, Aug 12, 2013 at 11:38 AM, Joshi, Shital w
Hi,
We have a SolrCloud (4.4.0) cluster (5 shards and 2 replicas) on 10 boxes with
about 450 million documents (~90 million per shard). We're loading 1000 or fewer
documents in CSV format every few minutes. In Solr3, with 300 million documents, it
used to take 30 seconds to load 1000 documents, while in Solr4,
The same thing happens. It only works with N/2 + 1 zookeeper instances up.
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Friday, August 09, 2013 11:22 AM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
On 8/9/2013 9:02 AM, Joshi
3 3:03 PM, Joshi, Shital wrote:
> We did quite a bit of testing and we think bug
> https://issues.apache.org/jira/browse/SOLR-4899 is not resolved in Solr 4.4
The commit for SOLR-4899 was made to branch_4x on June 10th.
lucene_solr_4_4 code branch was created from branch_4x on July 8th.
T
We did quite a bit of testing and we think bug
https://issues.apache.org/jira/browse/SOLR-4899 is not resolved in Solr 4.4
-Original Message-
From: Joshi, Shital [Tech]
Sent: Wednesday, August 07, 2013 2:48 PM
To: 'solr-user@lucene.apache.org'
Subject: RE: external zook
Subject: Re: external zookeeper with SolrCloud
You said earlier that you had 6 zookeeper instances, but the zkHost param
only shows 5 instances... is that correct?
On Tue, Aug 6, 2013 at 11:23 PM, Joshi, Shital wrote:
> Machines are definitely up. Solr4 node and zookeeper instance share
the zkHost param
only shows 5 instances... is that correct?
On Tue, Aug 6, 2013 at 11:23 PM, Joshi, Shital wrote:
> Machines are definitely up. Solr4 node and zookeeper instance share the
> machine. We're using -DzkHost=zk1,zk2,zk3,zk4,zk5 to let solr nodes know
> about th
that the upgrade to 4.4
was carried out on all machines?
Erick
On Tue, Aug 6, 2013 at 5:23 PM, Joshi, Shital wrote:
> Machines are definitely up. Solr4 node and zookeeper instance share the
> machine. We're using -DzkHost=zk1,zk2,zk3,zk4,zk5 to let solr nodes know
>
your Solr nodes at specific ZK
machines
that aren't up when you have this problem? I.e. -zkHost=zk1,zk2,zk3
Best
Erick
On Tue, Aug 6, 2013 at 4:56 PM, Joshi, Shital wrote:
> Hi,
>
> We have SolrCloud (4.4.0) cluster (5 shards and 2 replicas) on 10 boxes.
> We have 6
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Tuesday, June 11, 2013 10:42 AM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
On Jun 11, 2013, at 10:15 AM, "Joshi, Shital" wrote:
> Thanks Mark.
>
> Looks like
We have a SolrCloud (4.3.0) cluster (5 shards and 2 replicas) on 10 boxes. We
have about 450 million documents. We're planning to upgrade to Solr 4.4.0. Do
we need to re-index the already-indexed documents?
Thanks!
Thanks for all the answers. We decided to use VisualVM with multiple remote
connections.
-Original Message-
From: Utkarsh Sengar [mailto:utkarsh2...@gmail.com]
Sent: Friday, July 26, 2013 6:19 PM
To: solr-user@lucene.apache.org
Subject: Re: monitor jvm heap size for solrcloud
We have been
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 boxes. While running
stress tests, we want to monitor the JVM heap size across the 10 nodes. Is there a
utility that would connect to every node's JMX port and display all bean details
for the cloud?
Thanks!
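One rough approach, sketched here with illustrative hostnames and process match,
is to poll each node's heap with jstat over SSH:
#!/bin/bash
for host in node1 node2 node3; do
  ssh "$host" 'pid=$(pgrep -f start.jar | head -1); jstat -gcutil "$pid"' |
    awk -v h="$host" 'NR==2 {print h": old gen "$4"% used"}'   # 2nd line is the sample
done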
ck
On Wed, Jul 24, 2013 at 6:50 PM, Dominique Bejean
wrote:
> With 6 zookeeper instances you need at least 4 instances running at the same
> time. How can you decide to stop 4 instances and have only 2 instances
> running? Zookeeper can't work anymore in these conditions.
>
>
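(For reference: a ZooKeeper ensemble needs a strict majority, floor(N/2) + 1, to
elect a leader; with N = 6 that is 4 instances. Splitting the ensemble 4 and 2 across
datacenters means losing datacenter1 leaves only 2 of 6 and no quorum.)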
We have a SolrCloud cluster (5 shards and 2 replicas) on 10 dynamic compute boxes
(cloud), where the 5 leaders are in datacenter1 and the replicas in
datacenter2. We have 6 zookeeper instances - 4 in datacenter1 and 2 in
datacenter2. The zookeeper instances are on the same hosts as the Solr nodes. We'
Hi,
We have Solr 3.6 set up with a master and two slaves, each with a 70GB JVM. We
run into java.lang.OutOfMemoryError when we cross 250 million documents. Every
time this happens we purge documents to bring the count below 200 million and bounce
both slaves. We have facets on 14 fields. We usually don
, 2013, at 3:13 PM, "Joshi, Shital" wrote:
> Thanks Mark.
>
> We use commit=true as part of the request to add documents. Something like
> this:
>
> echo "$data" | curl --proxy "" --silent
> "http://HOST:9983/solr/collection1/update/csv?commi
at add documents? If so, it might be
SOLR-4923 and you should try the commit in a request after adding the docs.
- Mark
On Jun 27, 2013, at 4:42 PM, "Joshi, Shital" wrote:
> Hi,
>
> We finally decided on using custom sharding (implicit document routing) for
> our pro
Subject: Re: shardkey
On Fri, Jun 21, 2013 at 6:08 PM, Joshi, Shital wrote:
> But now Solr stores the composite id in the document id
Correct, it's the document id itself that contains everything needed
for the compositeId router to determine the hash.
> It would only use it to calculate has
splitting it. I will restart the
cloud and see if it goes away.
Thanks!
-Original Message-
From: Shawn Heisey [mailto:s...@elyograg.org]
Sent: Friday, June 21, 2013 5:38 PM
To: solr-user@lucene.apache.org
Subject: Re: SPLITSHARD throws error
On 6/21/2013 3:06 PM, Joshi, Shital wrote:
>
>
trol that two shardKeys must go
to two different shards. We can only guarantee that docs with the same
shardKey will go to the same shard.
On Mon, Jun 17, 2013 at 9:47 PM, Joshi, Shital wrote:
> Thanks for the links. It was very useful.
>
> Is there a way to use implicit router WITH numSha
[mailto:s...@elyograg.org]
Sent: Friday, June 21, 2013 4:45 PM
To: solr-user@lucene.apache.org
Subject: Re: SPLITSHARD throws error
On 6/21/2013 2:26 PM, Joshi, Shital wrote:
> Hi,
>
> We have 5 shards with replication factor 2 (total 10 jvm instances). Our
> shards are named (shardid) s
Hi,
We have 5 shards with replication factor 2 (total 10 JVM instances). Our shards
are named (shardid) shard1, shard2, shard3, shard4, and shard5, and the collection
name is collection1. When we execute this command:
curl --proxy ''
"http://$HOST_NAME:8983/solr/admin/collections?action=SPLITSHARD&collec
Hi,
We hard committed (/update/csv?commit=true) about 20,000 documents to
SolrCloud (5 shards, 1 replica each = 10 JVM instances). We have commented out both
the autoCommit and autoSoftCommit settings in solrconfig.xml. What we noticed is
that the transaction log size never goes down to 0. We thought o
hash using the uniqueKey defined in your
> schema.xml to route your documents to a dedicated shard.
>
> You can use select?q=xyz&shard.keys=uniquekey to focus your search to hit
> only the shard that has your shard.key
>
>
>
> Thanks,
>
> Rishi.
>
>
>
Hi,
We are using Solr 4.3.0 SolrCloud (5 shards, 10 replicas). I have couple
questions on shard key.
1. Looking at the admin GUI, how do I know which field is being used
for shard key.
2. What is the default shard key used?
3. How do I override the default shard key?
T
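For background: collections created with numShards use the compositeId router by
default, which hashes the uniqueKey field; a shard key can be supplied by prefixing
the document id with "key!". Illustrative ids:
customerA!doc1   <- same prefix, so...
customerA!doc2   <- ...these two hash to the same shard
customerB!doc9   <- different prefix, may land on a different shard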
:05 PM
To: solr-user@lucene.apache.org
Subject: Re: external zookeeper with SolrCloud
This might be https://issues.apache.org/jira/browse/SOLR-4899
- Mark
On Jun 10, 2013, at 5:59 PM, "Joshi, Shital" wrote:
> Hi,
>
>
>
> We're setting up 5 shard SolrCloud wit
Hi,
We're setting up a 5-shard SolrCloud with an external ZooKeeper. When we bring up
Solr nodes while the ZooKeeper instance is not up and running, we see this
error in the Solr logs.
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)