ftware.
So to ask a concrete question, is it possible to not use zk for config
distribution, i.e. keep the config local to each shard?
Mike Schultz
Ok, that makes sense and it's probably workable, but it's still more awkward
than having code and configuration deployed together to individual machines.
For example, for a deploy of new software/config we need to 1) first upload
the config to ZooKeeper, then 2) deploy the new software to the nodes.
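For step 1, assuming the zkcli.sh script that ships with the Solr 4.x
example (the host, paths, and config name below are placeholders), the
upload is a one-liner along the lines of:

  cloud-scripts/zkcli.sh -zkhost zk1:2181 -cmd upconfig \
      -confdir /path/to/conf -confname myconf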
What ab
Can someone explain the logic of not sending the qt parameter down to the
shards?
I see from here that qt is handled as a special case for Result Grouping:
http://lucidworks.lucidimagination.com/display/solr/Result+Grouping
where there is a special shards.qt parameter.
in 3.x solrconfig.xml support
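For what it's worth, the way I've seen distributed requests kept on the same
handler is to set shards.qt in the handler's defaults; a minimal sketch for
solrconfig.xml, assuming a handler named /grouped:

  <requestHandler name="/grouped" class="solr.SearchHandler">
    <lst name="defaults">
      <!-- send the internal per-shard requests to this same handler
           rather than the default /select -->
      <str name="shards.qt">/grouped</str>
    </lst>
  </requestHandler>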
It depends on what kind of behavior you're looking for.
If, for your queries, the order of the 6 integer values doesn't matter, you
could do:
then you could query with ORed or ANDed integer values over that field.
If the order matters but you always query on the "set" of 6 values, then you
turn y
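For the order-doesn't-matter case, the field definition would presumably be
something like this in schema.xml (the field and type names here are just
placeholders):

  <field name="vals" type="int" indexed="true" stored="false"
         multiValued="true"/>

and a query such as vals:(3 OR 17 OR 42), or the AND form if all six values
must be present.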
I have a 4.x cluster that is 10 wide and 4 deep. One of the nodes of a shard
went down. I provisioned a replacement node and introduced it into the
cluster, but it ended up on a random shard, not the shard with the downed
node.
Is there a maintenance step I need to perform before introducing a node?
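In case it's useful: on 4.x you can pin the new core to the shard that lost
its replica rather than letting it be assigned automatically, e.g. in the
old-style solr.xml (the collection and shard names below are made up):

  <cores adminPath="/admin/cores">
    <core name="core1" instanceDir="core1"
          collection="mycollection" shard="shard7"/>
  </cores>

The same collection/shard parameters can also be passed to the CoreAdmin
CREATE command.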
Yes. Just today actually. I had some unit tests based on
AbstractSolrTestCase which worked in 4.0, but in 4.1 they would fail
intermittently with that error message. The key to this behavior is found
by looking at the code in the Lucene class TestRuleSetupAndRestoreClassEnv.
I don't understand i
Just to clarify, I want to be able to replace the downed node with a host with
a different name. If I were repairing that particular machine and replacing
it, there would be no problem. But I don't have control over the name of my
replacement machine.
Solr 3.x had a master/slave architecture, which meant that indexing did not
happen in the same process as querying; in fact, normally not even on the
same machine. The query side only needed to copy down snapshots of the new
index files and commit them. Great isolation for maximum query performance
and
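A rough sketch of that 3.x setup in solrconfig.xml (the host name, conf
files, and poll interval below are placeholders; the master and slave
sections normally live on different machines):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <!-- on the indexing box -->
    <lst name="master">
      <str name="replicateAfter">commit</str>
      <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
    <!-- on the query box -->
    <lst name="slave">
      <str name="masterUrl">http://indexer:8983/solr/replication</str>
      <str name="pollInterval">00:03:00</str>
    </lst>
  </requestHandler>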
We have very long schema files for each of our language-dependent query
shards. One thing that is doubling the configuration length of our main
text-processing field definition is that we have to repeat the exact same
filter chain for the query-time version EXCEPT with a queryMode=true parameter.
Is
I should have explained that the queryMode parameter is for our own custom
filter. So the result is that we have 8 filters in our field definition.
All the filter parameters (30 or so) of the query-time and index-time chains
are identical EXCEPT for our one custom filter, which needs to know if it's in
q
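Roughly the shape of the duplication, as a heavily trimmed sketch (the type,
tokenizer, and filter class names below are stand-ins, not our real ones):

  <fieldType name="text_lang" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <!-- ~30 parameters, identical in both chains -->
      <filter class="com.example.OurQueryModeFilterFactory" queryMode="false"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <!-- the same chain repeated, differing only in queryMode -->
      <filter class="com.example.OurQueryModeFilterFactory" queryMode="true"/>
    </analyzer>
  </fieldType>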
I have an index with field collapsing defined like this:
SomeField
true
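Spelled out, that definition presumably sits in the search handler's
defaults in solrconfig.xml as something like:

  <lst name="defaults">
    <bool name="group">true</bool>
    <str name="group.field">SomeField</str>
  </lst>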
As far as I can tell, using field collapsing prevents the queryResultCache
from being checked. It's important for our application to
have both. There are threads on incorporating custom hit collectors, which
seems like it could be a way to implement the simplified collapsing I need
(just
Can you include the entire text for only the titolo field?
1.0 = tf(termFreq(titolo:trent)=1) means the index contains one occurrence
of 'trent' in that field for that doc.
Mike
You can use CustomScoreQuery to combine a scalar field value (e.g. the
amount of the paid placement) with the textual relevancy score. You can
combine things any way you want, e.g.
finalScore = textualScore + 1000.0 * scalarValue.
Or whatever makes sense. It sounds like you want some ki
I've looked through documentation and postings and expect that a single
filterCache entry should be approximately maxDoc/8 bytes.
Our frequently updated index (replication every 3 minutes) has maxDoc ~= 23
million.
So I'm figuring about 3MB per entry. With a cache size of 512 I expect
something like 1.5GB of RAM,
Does anyone have a clear understanding of how group caching
(group.cache.percent) achieves its performance improvements memory-wise?
Percent means percent of maxDoc, so
it's a function of that, but is it a function of that *per* item in the
cache (like filterCache) or altogether? The speed improvement looks pretty
dra