Hi,
Commit is taking more than 1300 ms. What should I check on the server?
Below is my configuration:
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:-1}</maxTime>
</autoSoftCommit>
LOL! Thanks.
Oh yeah. I've done my time in a support role! Nothing more maddening than
a user who won't share the facts!
On Mon, Aug 8, 2016 at 9:24 PM, Erick Erickson wrote:
> BTW, kudos for including the commands in your first problem statement
> even though, I'm sure, you wondered if it was necessary. ...
Hi All,
We are using Solr 6.1 and I wanted to know which is better to use:
deleteById or deleteByQuery?
We have a program which deletes 10 documents every 5 minutes from
Solr, and we do it in batches of 200 to delete those documents. For that we
now use deleteById(List ids, 1) to delete them ...
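For reference, here is a rough SolrJ sketch of that kind of batched delete (the ZooKeeper address, collection name, and fetchIdsToDelete() helper are placeholders I made up; deleteById(List, int) is the commitWithin variant mentioned above):

import java.util.Collections;
import java.util.List;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class BatchDelete {
  public static void main(String[] args) throws Exception {
    // zkHost and collection are placeholders for this sketch.
    try (CloudSolrClient client = new CloudSolrClient("zk1:2181,zk2:2181/solr")) {
      client.setDefaultCollection("mycollection");

      List<String> idsToDelete = fetchIdsToDelete(); // however the program finds them

      int batchSize = 200;
      for (int i = 0; i < idsToDelete.size(); i += batchSize) {
        List<String> batch =
            idsToDelete.subList(i, Math.min(i + batchSize, idsToDelete.size()));
        // deleteById(ids, commitWithinMs): a commitWithin of 1 ms asks Solr to
        // commit almost immediately after each batch, matching deleteById(List ids, 1).
        client.deleteById(batch, 1);
      }
    }
  }

  // Placeholder: stands in for whatever query or feed produces the ids to remove.
  private static List<String> fetchIdsToDelete() {
    return Collections.emptyList();
  }
}

deleteByQuery, by contrast, is a single request, but it is frequently reported on this list to interfere with concurrent indexing (it can stall other updates while it runs), so when you already know the ids, batched deleteById is usually the gentler choice.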
BTW, kudos for including the commands in your first problem statement
even though, I'm sure, you wondered if it was necessary. Saved at least
three back-and-forths to get to the root of the problem (little pun there)...
Erick
On Mon, Aug 8, 2016 at 3:11 PM, John Bickerstaff wrote:
> OMG!
>
> Thanks. Too long staring at the same string.
OMG!
Thanks. Too long staring at the same string.
On Mon, Aug 8, 2016 at 3:49 PM, Kevin Risden wrote:
> Just a quick guess: do you have a period (.) in your zk connection string
> chroot when you meant an underscore (_)?
>
> When you do the ls you use /solr6_1/configs, but you have /solr6.1 in your
> zk connection string chroot.
Just a quick guess: do you have a period (.) in your zk connection string
chroot when you meant an underscore (_)?
When you do the ls you use /solr6_1/configs, but you have /solr6.1 in your
zk connection string chroot.
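For example (hosts and paths here are illustrative): if your uploaded config sets show up when you list /solr6_1/configs in ZooKeeper, then the connection string has to use exactly that chroot, e.g.

  zk1:2181,zk2:2181,zk3:2181/solr6_1

and not

  zk1:2181,zk2:2181,zk3:2181/solr6.1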
Kevin Risden
On Mon, Aug 8, 2016 at 4:44 PM, John Bickerstaff wrote:
> First, the caveat: I understand this is technically a ZooKeeper error. ...
First, the caveat: I understand this is technically a ZooKeeper error. It
is an error that occurs when dealing with Solr, however, so I'm
hoping someone on the list may have some insight. Also, I'm getting the
error via the zkcli.sh tool that comes with Solr...
I have created a collection ...
Some more info that might be helpful. If I can trust my logging, this is
what's happening (search with rows=3 on a collection with 2 shards):
1) the delegating collector's finish() method places custom data on the request object
for _shard 1_
2) the doc transformer's transform() method is called for the 3 requested docs
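For anyone following along, a rough sketch of one way to do that handoff, assuming the stock Solr 6.x DelegatingCollector hook (the class name and the "shardAnalytics" context key below are made up for illustration, and the usual setScorer()/doSetNextReader() plumbing is omitted):

import java.io.IOException;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.request.SolrRequestInfo;
import org.apache.solr.search.DelegatingCollector;

// Runs per shard: collect() sees every matching doc, finish() runs once
// collection is done and stashes the per-shard result on the request context.
public class ShardAnalyticsCollector extends DelegatingCollector {

  private long matchCount = 0; // stand-in for whatever per-shard analytics you compute

  @Override
  public void collect(int doc) throws IOException {
    matchCount++;
    super.collect(doc); // keep passing docs down the collector chain
  }

  @Override
  public void finish() throws IOException {
    SolrQueryRequest req = SolrRequestInfo.getRequestInfo().getReq();
    req.getContext().put("shardAnalytics", matchCount); // illustrative key
    super.finish(); // let any downstream DelegatingCollector finish too
  }
}

A DocTransformer running on the same shard request (or anything else that can reach the SolrQueryRequest) can then read the same entry back, e.g. request.getContext().get("shardAnalytics"), while it decorates the returned documents.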
This is great, but where can I make this change in Solr 6, as I have implemented
CDCR?
Ritesh K
Infrastructure Sr. Engineer – Jericho Team
Sales & Marketing Digital Services
t +91-7799936921 v-kur...@microsoft.com
-----Original Message-----
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: ...
That makes sense. I would prefer to just merge the custom analytics, but
sending that much info via the Solr response seems very slow. However, I
still can't figure out how to access the custom analytics in a doc
transformer. That would provide the fastest response, but I would have to
merge the Ids ...
On Mon, Aug 8, 2016 at 5:10 AM, Callum Lamb wrote:
> We have a cronjob that runs every week at a quiet time to run the
> optimize command on our Solr collections. Even when it's quiet it's still an
> extremely heavy operation.
>
> One of the things I keep seeing on Stack Overflow is that optimizing is now
> essentially deprecated ...
Did you change the merge settings and max segments? If you did, try going back
to the defaults.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Aug 8, 2016, at 8:56 AM, Erick Erickson wrote:
>
> Callum:
>
> re: the optimize failing: Perhaps it's just timing out? ...
Yeah, Shawn, but you, like, know something about Tomcat and
actually provide useful advice ;)
On Mon, Aug 8, 2016 at 6:44 AM, Shawn Heisey wrote:
> On 8/7/2016 6:53 PM, Tim Chen wrote:
>> Exception in thread "http-bio-8983-exec-6571" java.lang.OutOfMemoryError:
>> unable to create new native thread
If at all possible, denormalize the data.
But you can also use Solr's Join capability here; see:
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-JoinQueryParser
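For example, with the field names from Philippe's message below (a sketch only; both document types are assumed to be in the same collection, and SolrCloud adds extra restrictions when joining across collections):

  q={!join from=customerid_i to=customerid_i}type_s:customer AND name_s:FISHER
  fq=type_s:ticket

The inner query selects the FISHER customer documents, the join maps their customerid_i values onto documents carrying the same customerid_i, and the fq keeps only the ticket documents.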
Best,
Erick
On Mon, Aug 8, 2016 at 8:47 AM, Pithon Philippe wrote:
> Hello,
> I have two document types ...
Callum:
re: the optimize failing: Perhaps it's just timing out?
That is, the command itself succeeds (which is what you're reporting),
but it's taking long enough that the request times out, so the client
you're using reports an error.
Just a guess...
My personal feeling is that (of course) you need to ...
Hello,
I have two document types:
- tickets (type_s:"ticket", customerid_i:10)
- customers (type_s:"customer", customerid_i:10, name_s:"FISHER")
I want a query to find all tickets for the customer named FISHER.
In the ticket documents (type_s:"ticket") I have the customer id but not the
customer name...
Any ideas?
Yeah, I figured that was too many deleted docs. It could just be that our max
segments setting is too high, though.
The reason I asked is because our optimize requests have started failing.
Or at least, they appear to fail because the optimize request returns
a non-200. The optimize seems to go ahead ...
On 8/2/2016 7:50 AM, Bernd Fehling wrote:
> The only assumption so far is that DIH is sending the records as "update" (and
> not pure "add") to the indexer, which will generate delete files during
> merge. If the number of segments is high it will take quite a long time to
> merge and check all records of all segments ...
On 8/8/2016 3:10 AM, Callum Lamb wrote:
> How true is this claim? Is optimizing still a good idea for the
> general case?
For the general case, optimizing is not recommended. If there are a
very large number of deleted documents, which does describe your
situation, then there is definitely a benefit ...
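If you do decide to run it, one lighter option is a partial optimize down to a handful of segments rather than one. A minimal SolrJ sketch (the URL, collection name, and segment count are placeholders; optimize(waitFlush, waitSearcher, maxSegments) is the standard SolrClient call):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class PartialOptimize {
  public static void main(String[] args) throws Exception {
    // Base URL and core/collection name are placeholders for this sketch.
    try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycollection")) {
      // Merge down to at most 10 segments instead of a full single-segment
      // optimize; merged segments still drop their deleted documents.
      client.optimize(true, true, 10);
    }
  }
}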
The mergeIds() method should return true if you are handling the merge of the
documents from the shards. If you are only merging custom analytics from an
AnalyticsQuery, then you would return false. In your case, since you
are de-duping documents, you would need to return true.
There are two methods in ...
On 8/7/2016 6:53 PM, Tim Chen wrote:
> Exception in thread "http-bio-8983-exec-6571" java.lang.OutOfMemoryError:
> unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:714)
> at java.util.concurrent.Thread...
Hello
I have 14 cores, with a couple of them using shards, and now I am looking at
the master/slave fallback solution. Can anyone please point me in the right
direction to get started?
Thanks
Kalpana
Hi Pablo, I will try this.
Sorry for the late reply, but I didn't get any notification of this answer!
Thanks,
Andrea
We have a cronjob that runs every week at a quiet time to run the
optimize command on our Solr collections. Even when it's quiet it's still an
extremely heavy operation.
One of the things I keep seeing on Stack Overflow is that optimizing is now
essentially deprecated and Lucene (we're on Solr 5.5.2) ...