Hi,
I am currently benchmarking a Solr index with different fields to see the
impact on its size, search speed, etc. A feature that reports the disk usage
per field of an index would be really handy and would save me a lot of time.
Are there any updates on this?
Has anyone tried writing custom code for it?
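Until there is built-in support, one rough workaround is to group the Lucene
index files by extension, since each extension maps to a known structure.
A sketch, assuming a stock index directory layout (the path is an example):

  # Sum index file sizes by extension: .fdt/.fdx = stored fields,
  # .tis/.tii = term dictionary, .frq = postings, .prx = positions,
  # .tvx/.tvd/.tvf = term vectors, .fnm = field names, .nrm = norms.
  cd /var/solr/data/index
  ls -l | awk 'NF >= 9 { ext = $NF; sub(/^.*\./, "", ext); sz[ext] += $5 }
    END { for (e in sz) printf "%-10s %12d bytes\n", e, sz[e] }' | sort -k2 -rn

This is per file type rather than per field, but it shows which structures
dominate the index; the LukeRequestHandler (/admin/luke) can complement it
with per-field term counts.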
-
Hey Robert,
Just wondering if you ever managed to solve this problem?
We are facing a similar issue with our catalog search :(
Look forward to hearing from you.
-Thanks,
Muneeb
Thanks for your input, guys. I will surely try these suggestions, in
particular reducing the heap size in JAVA_OPTIONS and adjusting the cache
sizes to see if that makes a difference.
I am also considering upgrading the RAM on the slave nodes, and looking into
moving from enterprise SATA HDDs to SSD flash/DRAM.
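On the cache side, the sizes live in solrconfig.xml; a sketch of the relevant
stanza (the numbers are placeholders to tune, not recommendations):

  <!-- solrconfig.xml: per-searcher caches; size/autowarmCount trade RAM
       for hit rate and warm-up cost after each commit. -->
  <filterCache      class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="128"/>
  <queryResultCache class="solr.LRUCache"     size="512" initialSize="512" autowarmCount="64"/>
  <documentCache    class="solr.LRUCache"     size="512" initialSize="512"/>

Autowarming large caches on every commit can itself slow a slave down, so the
warm-up times on the admin stats page are worth watching too.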
First, thanks very much for a prompt reply. Here is more info:
===
a) What operating system?
Debian GNU/Linux 5.0
b) What Java container (Tomcat/Jetty)?
Jetty
c) What JAVA_OPTIONS? I.e. memory, garbage collection etc.
-Xmx9000m -DDEBUG -Djava.awt.headless=true
-Dorg.mortbay.
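For what it's worth, a reduced-heap launch along the lines suggested might
look like this (a sketch; the 4GB figure and GC flags are assumptions to
test, not measured recommendations):

  # Leave most of the 12GB to the OS page cache, which serves the index
  # files; a 9GB heap on a 12GB box starves it.
  java -Xms4g -Xmx4g \
       -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
       -Djava.awt.headless=true \
       -jar start.jar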
Hi All,
I need some guidance on improving the search response time for our catalog
search. We are using Solr 1.4.0 and have a master/slave setup (3 dedicated
servers, one being the master and the other two slaves). The server specs
are as follows:
Quad-core 2.5GHz CPU (1333MHz FSB)
12GB RAM
2x250GB disks
Where do these lines go in the Solr config?
5000
1
Thanks,
-Muneeb
Well, I do have disk limitations too, and that's why I think the slave nodes
died when replicating data from the master node (it was just adding on top
of the existing index files).
What do you mean here? Optimizing is too CPU expensive?
What I meant by avoiding playing around with the slave nodes is that doing
>> In solrconfig.xml, these two lines control that. Maybe they need to be
>> increased.
>> 5000
>> 1
Where do I add those in solrconfig.xml? These lines don't seem to be present
in the example solrconfig file...
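For what it's worth, numeric tuning lines like those normally sit in the
<indexDefaults> / <mainIndex> sections of solrconfig.xml. The archive has
stripped the XML tags around the quoted "5000" and "1", so the sketch below
only shows the shape of that section, not the actual setting names:

  <mainIndex>
    <useCompoundFile>false</useCompoundFile>
    <ramBufferSizeMB>32</ramBufferSizeMB>
    <mergeFactor>10</mergeFactor>
    <!-- the two quoted values ("5000" and "1") lost their enclosing tags
         in the archive; they would appear as similar elements here -->
  </mainIndex>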
We have three dedicated servers for Solr, two for slaves and one for the
master, all with Linux/Debian packages installed.
I understand that replication always copies over the index in the exact form
it has in the master index directory (or it is supposed to, at least),
and if the master index wa
No, I didn't. I thought you aren't supposed to run optimize on slaves. Well,
it doesn't matter now, as I think it's fixed. I just added a dummy document
on the master, ran a commit call, and then, once that executed, ran an
optimize call. This triggered snapshooter to replicate the index, which
some
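For reference, the commit and optimize calls can be issued over HTTP like
this (a sketch; the host and core path are examples):

  # trigger a commit, then an optimize, on the master
  curl 'http://master:8983/solr/update' -H 'Content-Type: text/xml' \
       --data-binary '<commit/>'
  curl 'http://master:8983/solr/update' -H 'Content-Type: text/xml' \
       --data-binary '<optimize/>'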
I just checked my config file, and I do have the exact same values for the
deletionPolicy tag as you attached in your email, so I don't really think it
could be this.
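For comparison, the stanza in the stock Solr 1.4 example solrconfig.xml looks
like this (shown as a sketch; worth diffing against your own file rather than
trusting it verbatim):

  <deletionPolicy class="solr.SolrDeletionPolicy">
    <!-- keep only the most recent commit point -->
    <str name="maxCommitsToKeep">1</str>
    <!-- and no extra optimized commit points -->
    <str name="maxOptimizedCommitsToKeep">0</str>
  </deletionPolicy>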
Yes, I always run an optimize whenever I index on the master. In fact, I ran
an optimize command just an hour ago, but it didn't make any difference.
Hi,
I am using Solr 1.4 with a master-slave setup. We have one master and two
slave servers. It was all working fine, but lately the Solr slaves have been
behaving strangely. In particular, while replicating the index, the slave
nodes die and always need a restart. Also, the index size of slave no
Hi Blargy,
Nice to hear that I am not alone ;)
Well, we have been using Hadoop for other data-intensive services, those that
can be done in parallel. We have multiple nodes, which are used by Hadoop
for all our MapReduce jobs. I personally don't have much experience with its
use and hence wouldn