This is almost exactly the same setup I was using in Solr 3.6, so I'm not sure why it's
not working. Here is my setup:
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <str name="queryAnalyzerFieldType">textSpell</str>
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">spell</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="distanceMeasure">internal</str>
    <float name="accuracy">0.7</float>
    <int name="maxEdits">2</int>
    <int name="minPrefix">1</int>
    <int name="maxInspections">5</int>
    <int name="minQueryLength">4</int>
    <float name="maxQueryFrequency">0.01</float>
  </lst>
</searchComponent>
I am using spellcheck=true when I post the search, e.g.:
solr/productindex/productQuery?q=fuacet&spellcheck=true
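The spellcheck component also has to be hooked into the query handler via last-components; the wiring looks roughly like this (the handler name matches the query above, the other values are illustrative):

<requestHandler name="/productQuery" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck.count">5</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>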
When I set distrib=false the spellchecker works perfectly. So I take it the
spellchecker doesn't work in Solr 4.1 in cloud mode. Does anybody know if it
works in 4.2.1?
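The full workaround request, for reference:
solr/productindex/productQuery?q=fuacet&spellcheck=true&distrib=false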
Thank you for the response
I would like to use the Query Elevation Component. As I understand it, it only
elevates based on the query term. I would also like it to consider the list of fq
parameters, or really just one fq parameter, e.g. fq=siteid:4, since I use the
same Solr index for many sites. Is something like this already available?
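For context, entries in elevate.xml are keyed purely by query text, e.g. (doc ids here are made up):

<elevate>
  <query text="faucet">
    <doc id="PROD-101"/>
    <doc id="PROD-102"/>
  </query>
</elevate>

so there is nowhere to hang an fq such as siteid:4.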
I want to elevate certain documents differently depending on a certain fq
parameter in the request. I've read of somebody customizing Solr to do this, but
no code was shared. Where would I start looking to implement this feature
myself?
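One possible starting point, sketched very roughly below and only as a sketch: a custom SearchComponent that reads q and the siteid fq in prepare() and injects a boost query for the documents that should float to the top. The package, class name, and lookup are placeholders.

package com.example.solr;  // placeholder package

import java.io.IOException;

import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;

/** Sketch of fq-aware elevation: boost per-site documents very high. */
public class SiteElevationComponent extends SearchComponent {

  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    SolrParams params = rb.req.getParams();
    String q = params.get("q");
    String[] fqs = params.getParams("fq");
    if (q == null || fqs == null) return;

    for (String fq : fqs) {
      if (fq.startsWith("siteid:")) {
        String siteId = fq.substring("siteid:".length());
        // Real code would load (query, siteId) -> doc id mappings from a
        // config file in init()/inform(); this lookup is a placeholder.
        String docId = lookupElevatedDoc(q, siteId);
        if (docId != null) {
          ModifiableSolrParams newParams = new ModifiableSolrParams(params);
          newParams.add("bq", "id:" + docId + "^1000000");  // edismax boost query
          rb.req.setParams(newParams);
        }
      }
    }
  }

  private String lookupElevatedDoc(String query, String siteId) {
    return null;  // placeholder
  }

  @Override
  public void process(ResponseBuilder rb) throws IOException {
    // All the work happens in prepare(); nothing to do per shard here.
  }

  @Override
  public String getDescription() {
    return "Site-aware elevation sketch";
  }

  @Override
  public String getSource() {
    return "";
  }
}

The other obvious route is subclassing QueryElevationComponent itself, but as far as I can tell its elevation map is keyed by analyzed query text only, so most of the lookup logic would need replacing anyway.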
Every time I try to do a reload using the Collections API, my entire cloud goes
down and I cannot search it. The solrconfig.xml and schema.xml are good,
because when I just restart Tomcat everything works fine.
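The reload in question is the standard Collections API call, along the lines of (host and collection name are examples):
http://localhost:8983/solr/admin/collections?action=RELOAD&name=collection1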
Here is the output of the Collections API reload command:
59155087 [Overseer-89776537554
Is it possible that this has something to do with it?
59157032 [Thread-2] INFO org.apache.solr.cloud.Overseer – Update state
numShards=null message={
numShards=null
I have not implemented it yet, and I forget the exact webpage I found, but
there was a person on that page discussing the same problem who said it was
easy to implement a solution for it, though he did not share his solution. If
you figure it out, let me know.
The only custom code I have is three custom transformers for my DIH.
Here is the code.
package org.build.com.solr;
/**
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
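For reference, each transformer has roughly this shape (the class name and column below are placeholders, not the actual code):

import java.util.Map;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.Transformer;

// Placeholder sketch of a custom DIH transformer: upper-cases one column.
public class ExampleTransformer extends Transformer {
  @Override
  public Object transformRow(Map<String, Object> row, Context context) {
    Object name = row.get("name");   // placeholder source column
    if (name != null) {
      row.put("name", name.toString().toUpperCase());
    }
    return row;                      // returning the row keeps the document
  }
}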
It looks like any changes to configuration files need to be partnered with a
rolling restart of the cloud:
http://wiki.apache.org/solr/SolrCloud#Example_C:_Two_shard_cluster_with_shard_replicas_and_zookeeper_ensemble
ZooKeeper
Multiple ZooKeeper servers running together for fault tolerance and high availability ...
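Concretely, the sequence seems to be: push the updated config to ZooKeeper with zkcli, then do a rolling restart of the nodes (paths, hosts, and names below are examples):

cloud-scripts/zkcli.sh -cmd upconfig \
    -zkhost zk1:2181,zk2:2181,zk3:2181 \
    -confdir /path/to/collection1/conf \
    -confname myconf
# then restart each Solr node in turn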
When I try to start a Solr server in my SolrCloud I receive the error:
SEVERE: null:org.apache.solr.common.SolrException:
org.apache.zookeeper.server.quorum.QuorumPeerConfig$ConfigException: Error
processing /srv/solr//zoo.cfg
But I don't understand why I would need zoo.cfg in solr/home whe
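As far as I understand it, zoo.cfg in solr/home is only consulted when Solr runs the embedded ZooKeeper (zkRun); pointing at an external ensemble with zkHost avoids it. Roughly, with the stock Jetty example (hosts and paths are illustrative):

# embedded ZooKeeper - reads zoo.cfg from solr/home
java -DzkRun -DnumShards=3 -Dbootstrap_confdir=./solr/collection1/conf -Dcollection.configName=myconf -jar start.jar

# external ensemble - no zoo.cfg needed on the Solr side
java -DzkHost=zk1:2181,zk2:2181,zk3:2181 -jar start.jar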
This is what I get from the leader overseer log:
2013-01-02 18:04:24,663 - INFO [ProcessThread:-1:PrepRequestProcessor@419]
- Got user-level KeeperException when processing sessionid:0x23bfe1d4c280001
type:create cxid:0x58 zxid:0xfffe txntype:unknown reqpath:n/a
Error Path:/overseer E
Yes, that is exactly what I was hoping for. I can live with just adding nodes
manually for now. It would be nice if this feature were included in 4.1, though,
as I will be waiting for the 4.1 release to make the jump to SolrCloud.
I am having issues any time I add or delete documents. The issue is that the
log reports the commit happening, but when I search after the commit I get no
change in the result set. It's only after I manually commit again that I can see
the new results.
For example I have a So
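One guess worth ruling out: if autoCommit is configured with openSearcher=false, the commit shows up in the log and the data is durable, but no new searcher is opened, so results don't change until an explicit commit that does open one. For example:

<autoCommit>
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

versus an explicit commit (collection name is an example):

http://localhost:8983/solr/collection1/update?commit=true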
Every time I stop my SolrCloud (3 shards, 1 replica each, total 6 servers)
and then restart it I get the following error:
SEVERE: Error getting leader from zk
org.apache.solr.common.SolrException: Could not get leader props
at
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController
I have a SolrCloud as seen here: http://d.pr/i/ya86
When I stop solr-shard-1, solr-shard-4 should become the new leader, but it
does not. Here is the output from the logs.
INFO: A cluster state change has occurred - updating...
Jan 07, 2013 6:11:54 PM org.apache.solr.cloud.ShardLeaderElectionC
When I used 4.0 I could run my DIH on any shard and the documents would be
distributed based on the internal hashing algorithm, ending up spread evenly
across my three shards.
I have just upgraded to Solr 4.1 and I have noticed that my documents always
end up on the shard that I run the DIH on.
I found this: https://issues.apache.org/jira/browse/SOLR-2592
and tried adding the following to my solrconfig.xml:
groupid
false
However I am still getting all documents added to the shard that I run the
DIH on.
According to this comment on the tracker the default sharding should b
Also this doesn't seem like a problem with DIH specifically. I tried doing
JSON updates with the same result. Documents are only added to the shard
that I send the updates to.
I am using the CoreAdmin API to assign nodes to the cluster. How can I tell the
collection how many shards it has without using the Collections API or
bootstrapping?
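As far as I know, numShards can also be passed on the CoreAdmin CREATE call for the first core of the collection, something like (host and names are examples):

http://localhost:8983/solr/admin/cores?action=CREATE&name=shard1_replica1&collection=mycollection&numShards=3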
Ok, that worked, thank you!
Now it seems like it is still using the default hashing function.
I put this in my solrconfig.xml under the tag
groupid
false
I want to shard on groupid instead of id but it doesn't seem to be working.
I'm not sure I understand. I thought ID had to be unique.
For example, I have the following:
[
  { "id" : 1, "groupid" : 1 },
  { "id" : 2, "groupid" : 1 },
  { "id" : 3, "groupid" : 2 }
]
How would I currently ensure that ids 1 and 2 end up on the same shard using
the ID?
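As far as I know, the 4.1 answer is the compositeId router: fold the group into the id as a prefix before '!', and documents sharing that prefix hash to the same shard. An illustrative version of the same documents:

[
  { "id" : "1!1", "groupid" : 1 },
  { "id" : "1!2", "groupid" : 1 },
  { "id" : "2!3", "groupid" : 2 }
]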