The cluster state problem reported above is not an issue - it was caused by
our own code.
Speaking of the update log - I have noticed strange behavior concerning
the replay. The replay is *supposed* to be done for a predefined number of
log entries, but actually it is always done for the whole
The hard commit is set to about 20 minutes, while the RAM buffer is 256MB.
We will add more frequent hard commits without refreshing the searcher;
thanks for the tip.
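For reference, a minimal sketch of what we plan to put in solrconfig.xml
(the interval value is just an illustration):

  <autoCommit>
    <!-- hard commit to disk every 60 seconds -->
    <maxTime>60000</maxTime>
    <!-- do not open a new searcher on hard commit -->
    <openSearcher>false</openSearcher>
  </autoCommit>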
From what I understood from the code, for each 'add' command there is a test
for a 'delete by query'. If there is an older DBQ, it's run
A small correction: it's not an endless loop, but painfully slow processing,
which includes running a delete query and then an insertion. Each document
from the tlog takes tens of seconds to process (more than 100 times slower
than during the normal insertion process).
Consider the following:
Solr 4.3, 2-node test cluster, each node a leader.
During indexing (or immediately after it, before the hard commit) I shut down
one of the nodes and restart it later.
The tlog is about 200MB in size.
I see recurring 'Reordered DBQs detected' in the log; it seems like an
endless loop because THE VE
I'm going to use the implicit DocRouter for sharding. Our sharding is not
based on a hashing mechanism.
As far as I understand, if I don't provide the numShards parameter, the
implicit router is used. My question is:
using implicit routing, how can I assign a new core to a new shard,
instead of join
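For what it's worth, the CoreAdmin CREATE call seems to accept an explicit
shard name; I imagine something like the following (core, collection and
shard names here are placeholders):

  http://localhost:8983/solr/admin/cores?action=CREATE&name=core1&collection=mycollection&shard=shardA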
In pre-cloud versions of Solr it was necessary to pass the shards and
shards.qt parameters in order to make the /suggest handler work standalone.
How should it work in SolrCloud?
The SpellCheckComponent skips the distributed stage of processing, and thus
I get suggestions only when I force distrib=false mode.
Se
As long as Core Admin is accessible via HTTP and allows manipulating Solr
cores, it should be secured, regardless of the configured path. The
difference between securing Admin vs. securing other handlers is that other
handlers are accessed by specific application server(s), and therefore may
be easi
Hi,
There are a lot of posts which talk about hardening the /admin handler with
user credentials etc.
On the other hand, the replication handler won't work if /admin/cores is
also hardened.
Given this, how could I allow secure external access to the admin
interface AND allow proper clu
You should use the language detection processor factory, like below:

  <processor class="solr.LangDetectLanguageIdentifierUpdateProcessorFactory">
    <str name="langid.fl">content</str>
    <str name="langid.langField">language</str>
    <str name="langid.fallback">en</str>
    <bool name="langid.map">true</bool>
    <str name="langid.map.fl">content,fullname</str>
    <bool name="langid.map.keepOrig">true</bool>
    <str name="langid.whitelist">en,fr,de,es,ru,it</str>
    <float name="langid.threshold">0.7</float>
  </processor>

Once you have defined fields like content_en, content_fr, etc., they will b
Hi,
I would like to store the document content in a single special field (not
indexed, stored only), and create several indexed copy fields (with
different analysis applied).
During highlighting, the analysis definitions of the stored field are used,
so improper or no highlighting is done.
Is the
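The intended schema, sketched (field and type names here are just examples):

  <field name="content" type="string" indexed="false" stored="true"/>
  <field name="content_text" type="text_general" indexed="true" stored="false"/>
  <field name="content_ngram" type="text_ngram" indexed="true" stored="false"/>
  <copyField source="content" dest="content_text"/>
  <copyField source="content" dest="content_ngram"/>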
According to what I see, the basic LRUCache implementation has an empty
close(), which is why it still works for an already closed searcher.
According to the base interface, close() is there for "freeing non-memory
resources".
Can someone explain this counter-intuitive behavior?
Why is it allo
Hi,
I've written a unit test for a custom search component, which naturally
extends SolrTestCaseJ4.
beforeClass() has initCore(), assertU(adoc()) and assertU(commit()) inside.
The test creates a SolrQueryRequest via req() and runs h.query(request). In
other words, nothing special.
I see a rathe
Hi,
Suppose a document stored in the index has fields A and B.
What would be the best way to alter the value of B after the result set is
available?
The modified value of B is influenced by the value of A, and also by some
custom logic based on a (custom) SolrCache.
Can it be a custom function query
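If a function query can work, I imagine registering a custom
ValueSourceParser in solrconfig.xml and referencing it as a pseudo-field
(the class and function names below are hypothetical):

  <valueSourceParser name="adjustB" class="com.example.AdjustBValueSourceParser"/>

and then requesting something like fl=A,B,adjusted:adjustB(A,B).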
Hi,
I need a small clarification on how forwarding to a non-/select handler
works.
When I define a distinct handler /terms with the TermsComponent inside (or
/suggest with the SpellCheckComponent defined for the suggester), the
distributed call never works. The reason is simple - the request always get
ZooKeeper manages not only the cluster state, but also the common
configuration files.
My question is: what are the exact rules of precedence? That is, when will
a Solr node decide to download new configuration files?
Will the configuration files be updated from ZooKeeper every time the core
is refreshe
Correction:
shards.qt is sufficient, but you cannot define only the spellcheck component
in the requestHandler, as it doesn't create shard requests; it seems the
'query' component is a must if you want distributed processing.
The only way I succeeded in forwarding to the right request handler was:
1. shards.qt=/suggest (shards.qt=%2Fsuggest actually) in the query
2. handleSelect='true' in solrconfig
3. NO /select handler in solrconfig
Only this combination forces two things - the shard handler forwards the
qt=/suggest parameter to othe
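In solrconfig.xml terms, the combination above looks roughly like this
(component and dictionary names follow the wiki suggester example; treat it
as a sketch):

  <requestHandler name="/suggest" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="spellcheck">true</str>
      <str name="spellcheck.dictionary">suggest</str>
      <str name="shards.qt">/suggest</str>
    </lst>
    <arr name="components">
      <str>suggest</str>
    </arr>
  </requestHandler>

plus handleSelect="true" on <requestDispatcher>, with no explicit /select
handler defined.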
I tried to define a suggest component as it appears in the Wiki.
I also defined a specific /suggest request handler.
This doesn't work in a SolrCloud setup, as the query is distributed to the
default /select handler instead.
Specifically, the shard handler gets the default URLs, and the other cores
forward to /select.
s
Setup:
1 node, 4 cores, 2 shards.
15 documents indexed.
Problem:
the init stage times out.
Probable cause:
according to the init flow, cores are initialized one by one, synchronously.
Actually, the main thread waits in
ShardLeaderElectionContext.waitForReplicasToComeUp until the retry threshold,
while the replic
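If I read the code right, the wait is bounded by the leaderVoteWait setting;
in the 4.x solr.xml format it appears to be an attribute on <cores>,
something like (the value is just an illustration):

  <cores adminPath="/admin/cores" leaderVoteWait="10000">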
Using a SolrCloud release with the following configuration:

  <searchComponent name="elevator" class="solr.QueryElevationComponent">
    <str name="queryFieldType">string</str>
    <str name="config-file">elevate.xml</str>
  </searchComponent>

  <requestHandler name="/elevate" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="df">text</str>
    </lst>
    <arr name="last-components">
      <str>elevator</str>
    </arr>
  </requestHandler>

Running the query
http://localhost:8080/solr/collection1/elevate?q=evelatedtext
I constantly get the following exception:
SEVERE: nu
The situation can be reproduced on Solr 4 (SolrCloud):
1. Define a warmup query
2. Add the spell checker configuration to the /select search handler
3. Set spellcheck.collate=true
The server gets stuck in the init phase due to a deadlock.
Is there a bug open for this?
Actually you cannot get collated sp
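The combination in solrconfig.xml is roughly the following (the warmup query
text is arbitrary; a sketch, assuming the /select handler has the spellcheck
component attached):

  <listener event="firstSearcher" class="solr.QuerySenderListener">
    <arr name="queries">
      <lst>
        <str name="q">solr</str>
        <str name="spellcheck">true</str>
        <str name="spellcheck.collate">true</str>
      </lst>
    </arr>
  </listener>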
It is actually connected to this:
https://gist.github.com/2880527
Once you have spellcheck.collate=true plus a warmup query, the init gets
stuck on a wait
After a little bit of investigation, it's about the searcher warmup that
doesn't happen.
I see the main thread waiting for the searcher. The warmup query handler is
stuck in another thread on the very same lock in getSearcher(), and no
notify() is called.
If I set useColdSearcher=true, this o
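For reference, the flag in question is a solrconfig.xml setting:

  <useColdSearcher>true</useColdSearcher>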
I only started learning the new features, so chances are it's about some
misconfiguration.
I removed collection2 from the setup and indexed some files.
Now there is another pattern that gets the init stuck, and it's about the
overseer polling the queue:
Oct 24, 2012 2:18:52 PM org.apache.solr.core
In other words, I would have to apply a mixture of modes: SolrCloud within
each location, plus old-style replication for mirroring.
BTW, I've seen a notion of a 'role' in the node cloud state. Is it in use,
or is it there for future extensions? Having 'indexer' and 'searcher' roles
backed by the infrastructure wou
Hi,
As far as I understand, SolrCloud eliminates the master-slave specifics and
automates both updates and searches seamlessly.
What should I take into account when configuring SolrCloud for a large
customer with multiple physical locations?
I mean, for older Solr I would define a master 'close to the data'