Thanks, Eric, for your suggestion.
It helped me increase the znode data size from 1 MB to 2 MB.
Here is the reference for changing this configuration:
https://zookeeper.apache.org/doc/r3.3.2/zookeeperAdmin.html
I used this parameter in JAVA_OPTS, -Djute.maxbuffer=2M, which helped me
to
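One caveat worth noting: per the ZooKeeper admin guide, `jute.maxbuffer` is specified in bytes and is read by both the server and every client JVM, so a 2 MB limit is normally written as a plain byte count rather than with an `M` suffix. A minimal sketch (the surrounding startup script is assumed, not shown):

```shell
# jute.maxbuffer is given in bytes; set it identically on the
# ZooKeeper server and on each client (e.g. Solr) JVM.
MAXBUF=$((2 * 1024 * 1024))   # 2 MB
JAVA_OPTS="$JAVA_OPTS -Djute.maxbuffer=$MAXBUF"
echo "$JAVA_OPTS"
```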
I have also been facing this issue recently.
Is there any solution for it? I have almost 3000+ cores created and am adding
some more.
Please advise whether there is a restriction on the number of cores, shards, and
collections.
Here is the trace:
Jun 23, 2014 9:01:45 AM org.apache.solr.common.SolrException log
SEVERE
One thing you can try is to optimize incrementally. Instead of optimizing
down to 1 segment, optimize to 100 segments, then 50, 25, 10, 5, 2, 1.
After each step, the index size should go down, so you don't have to
wait 7 hours to get some results.
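The step-down optimize above can be scripted against the update handler's `maxSegments` parameter. A sketch, assuming a single-core Solr at the default port (URL is illustrative); the commands are echoed here, so drop the `echo` to run them against a live instance:

```shell
# Optimize in steps instead of straight to 1 segment; each pass
# merges the index down to at most $n segments.
for n in 100 50 25 10 5 2 1; do
  echo curl "http://localhost:8983/solr/update?optimize=true&maxSegments=$n"
done
```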
Pravin
On Fri, Jun 14, 2013 at 10:45 AM, Viresh Modi
Hi Viresh,
How much free disk space do you have? If you don't have enough space
on disk, the optimization process stops and rolls back to some intermediate
state.
Pravin
On Fri, Jun 14, 2013 at 2:50 AM, Viresh Modi wrote:
> Hi Rafal
>
> Here i attached solr index file snapsho
this index further on 4 shards to reduce the index size
per shard.
A few more questions: we have a large number of unique terms in
our index, so is facet method fc better than enum? And
can a large facet.enum.cache.minDf value help?
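For experimenting, the facet method can be switched per request, so both variants are easy to benchmark side by side. A sketch (the field name and URL are illustrative); the URLs are only printed here:

```shell
# facet.method=fc uses the field cache and usually wins with many
# unique terms; enum iterates the terms and benefits from a higher
# facet.enum.cache.minDf (filters below that df are not cached).
BASE="http://localhost:8983/solr/select?q=*:*&facet=true&facet.field=term_field"
echo "$BASE&facet.method=fc"
echo "$BASE&facet.method=enum&facet.enum.cache.minDf=25"
```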
Thanks,
Pravin Agrawal
-Original
l increase further.
Please let us know if there is any better way to extract (site, term,
frequency) information compare to current method.
Thanks,
Pravin Agrawal
DISCLAIMER
==
This e-mail may contain privileged and confidential information which is the
property of Persistent Sys
Pravin Agrawal
0 | phrase_name:"aaa
bbb"~5^3.0 | phrase_name:"mmm nnn"~5^3.0 | phrase_name:"aaabbb"~5^3.0)~0.5
However, that is not the case.
Please let me know if I am missing something or if this is expected behavior. Also,
please let me know what should be done to get my des
Hi James,
Thanks a lot for your reply. The workaround you suggested is working fine
for me. Hope to see this enhancement in a future release of Solr.
-Pravin
From: Dyer, James [james.d...@ingrambook.com]
Sent: Monday, December 19, 2011 11:11 PM
To
ng fieldtype
Thanks
Pravin
Hello,
Andy, did you get a final answer to your question?
I am also trying to do something similar. Please give me pointers if you
have any.
Basically, I also need to use NGram with the WhitespaceTokenizer; any help will be
appreciated.
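For what it's worth, a minimal schema.xml fieldType sketch that combines the whitespace tokenizer with an n-gram filter (the type name and gram sizes are illustrative, not from any particular message in this thread):

```xml
<fieldType name="text_ws_ngram" class="solr.TextField">
  <analyzer>
    <!-- split on whitespace first, then n-gram each resulting token -->
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="2" maxGramSize="15"/>
  </analyzer>
</fieldType>
```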
just have to copy the Solr war to the Hadoop cluster, or is something else needed?
(Note: I have two setups:
1. Hadoop setup
2. Solr setup)
So, to run distributed indexing, how do I bridge these two setups?
Thanks
-Pravin
-Original Message-
From: Jason Rutherglen [mailto:jason.rutherg...@gmail.com]
Sent: Friday
Hi,
I am using the SOLR-1301 patch. I have built Solr with the given patch,
but I am not able to configure Hadoop for the resulting war.
I want to run Solr (create indexes) on a 3-node (1+2) cluster.
How do I do the Hadoop configuration for this patch?
How do I set up master and slave?
Thanks
-Pravin
storage ?
On Fri, Oct 9, 2009 at 6:10 PM, Pravin Karne
wrote:
> Hi,
> I am new to Solr. I have configured Solr successfully and it is working
> smoothly.
>
> I have one query:
>
> I want to index large data (around 100 GB). So can we store these indexes on
> different machines as distrib
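In the Solr versions of that era, distribution happens at query time: each machine runs an independent Solr instance holding a slice of the index, and one instance fans a search out via the `shards` parameter. A sketch (host names and core paths are illustrative):

```shell
# Query host1, which forwards the search to every listed shard and
# merges the results; each shard indexes its own subset of the data.
SHARDS="host1:8983/solr,host2:8983/solr"
echo "http://host1:8983/solr/select?q=solr&shards=$SHARDS"
```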
-Pravin
Hi
I have indexed data with Lucene. I want to deploy these indexes on Solr for search.
Generally we index and search data with Solr, but now I want to search
existing Lucene indexes.
How can we do this?
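One common approach, sketched here as an assumption rather than a confirmed recipe for this thread: point Solr's dataDir at the existing index and define a schema whose fields match what Lucene indexed. The path is illustrative, and the Lucene version that built the index must be compatible with the one bundled in Solr:

```xml
<!-- solrconfig.xml: dataDir must contain an index/ subdirectory
     holding the Lucene segment files -->
<dataDir>/path/to/lucene/data</dataDir>
```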
-Pravin
from server2.
How do I solve this?
Is any other setting required for this?
Thanks in advance
-Pravin
-Original Message-
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: Wednesday, October 07, 2009 3:37 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Quries
Firs
advance.
-Pravin
-Original Message-
From: Sandeep Tagore [mailto:sandeep.tag...@gmail.com]
Sent: Wednesday, October 07, 2009 4:29 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr Quries
Hi Pravin,
1. Does Solr work in a distributed environment? If yes, how do I configure it?
Yep. You can
dividing the file into smaller files, and it's working.
Is there any other way to post a large file, as the above workaround is not feasible
for a 1 TB file?
Thanks
-Pravin
more headache )
Thanks in advance
-Pravin
after adding documents?
If you are, you might want to compare the size of each document being
currently indexed with the ones you indexed a few months back.
To optimize the index, simply post to Solr. Or read
[http://wiki.apache.org/solr/SolrOperationsTools]
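Concretely, "post to Solr" here means sending an `<optimize/>` message to the update handler. A sketch (URL is illustrative; the command is echoed rather than executed):

```shell
# The classic optimize call: POST an <optimize/> message to /update.
echo curl "http://localhost:8983/solr/update" \
  -H "Content-Type: text/xml" --data-binary "<optimize/>"
```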
Pravin
2009/9/30 swapna_here :
>
Also, what is your merge factor set to?
Pravin
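For context, a sketch of where that setting lives in a Solr 1.x-era solrconfig.xml (10 is the usual default; treat the snippet as illustrative):

```xml
<!-- solrconfig.xml, inside <mainIndex>: how many segments of equal
     size accumulate before they are merged; lower values mean fewer,
     larger segments and slower indexing but faster searches -->
<mergeFactor>10</mergeFactor>
```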
2009/9/30 Pravin Paratey :
> Swapna,
>
> Your answers are inline.
>
> 2009/9/30 swapna_here :
>>
>> hi all,
>>
>> I have indexed 10 documents (daily around 5000 documents will be indexed
>> one at
You may want to check out - http://code.google.com/p/solrnet/
2009/9/30 Antonio Calò :
> Hi All
>
> I'm wondering if a Solr version for .NET is already available, or if it is
> still under development/planning. I've searched the Solr website but I've
> found only info on the Lucene.Net project.
>
> Bes
ciated
>
> Thanks in advance..
>
Hope this helps
Pravin
AFAIK, you're going to have to code something up. Do remember to add CDATA
sections to your XML.
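A minimal sketch of what that CDATA wrapping looks like in a Solr add document (the field name is illustrative):

```xml
<add>
  <doc>
    <!-- CDATA lets raw markup and ampersands pass through unescaped -->
    <field name="body"><![CDATA[raw content with <markup> & entities]]></field>
  </doc>
</add>
```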
On Tue, Mar 10, 2009 at 11:31 PM, KennyN wrote:
>
> This functionality is possible 'out of the box', right? Or am I going to
> need
> to code up something that reads in the id named files and generates t