Hello,
Can we create a collection across data centers (with a shard replica in a
different data center) for HA?
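Something like the following is the kind of thing I have in mind (host names
are made up, and I am assuming both data centers are nodes of one SolrCloud
cluster sharing the same ZooKeeper ensemble; createNodeSet is just one way to
pin replicas to particular nodes):

  http://dc1-node1:8983/solr/admin/collections?action=CREATE
      &name=mycollection&numShards=2&replicationFactor=2
      &createNodeSet=dc1-node1:8983_solr,dc2-node1:8983_solr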
Thanks
Revas
Thanks, Erick. It's just that when we enable both indexed=true and
docValues=true, it increases the index time by at least 2x for a full re-index.
On Wed, May 20, 2020 at 2:30 PM Erick Erickson
wrote:
> Revas:
>
> Facet queries are just queries that are constrained by the total result
>
Erick, can you also explain how to optimize facet queries and range facets, as
they don't use docValues and contribute to higher response times?
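For example, we currently express range buckets with facet.query, roughly like
the first form below; I am wondering whether the equivalent JSON range facet
(second form; the field name is only illustrative) would be able to work off
docValues instead:

  facet=true&facet.query=price:[0 TO 100]&facet.query=price:[100 TO 200]

  json.facet={
    prices : { type : range, field : price, start : 0, end : 200, gap : 100 }
  }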
On Tue, May 19, 2020 at 5:55 PM Erick Erickson
wrote:
> They are _absolutely_ able to be used together. Background:
>
> “In the bad old days”, there was no
re spaced far enough apart that the warming completes before a new
> searcher starts warming.
>
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
>
> On Mon, May 4, 2020 at 10:27 AM Revas wrote:
>
> > Hi Erick, Thanks for the explanation and advice. With face
t get adequate response time even under fairly light query loads
> as a general rule.
>
> Best,
> Erick
>
> > On Apr 16, 2020, at 12:08 PM, Revas wrote:
> >
> > Hi Erick, You are correct, we have only about 1.8M documents so far and
> > turning on the inde
t I think that’s the root issue here.
>
> Best,
> Erick
>
> > On Apr 14, 2020, at 11:51 PM, Revas wrote:
> >
> > We have faceting fields that have been defined as indexed=false,
> > stored=false and docValues=true
> >
> > However we use a lot of subfa
We have faceting fields that have been defined as indexed=false,
stored=false and docValues=true
However, we use a lot of subfacets using JSON facets and facet ranges
using facet.queries. We see that after every soft commit our performance
worsens, while it is ideal between commits
how is that d
Hi
I am seeing searchers referenced in my logs as main and realtime. Do they
correspond to hard vs soft commit? I do not see the correlation to that based
on our commit settings.
Opening [Searcher@538abc62[xx_shard1_replica2] main]
Opening [Searcher@2e151991[ xx _shard1_replica1] realtime]
Thanks
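For context, the commit configuration behind log lines like these usually looks
something like the sketch below (times are purely illustrative, not the actual
settings); as far as I understand it, a hard commit with openSearcher=false only
refreshes the internal "realtime" searcher, while a soft commit (or a hard
commit with openSearcher=true) opens a new "main" searcher that serves queries:

  <autoCommit>
    <maxTime>60000</maxTime>            <!-- hard commit every 60s -->
    <openSearcher>false</openSearcher>  <!-- no new main searcher -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>120000</maxTime>           <!-- soft commit opens a main searcher -->
  </autoSoftCommit>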
wered
> machine. The hard commit will trigger segment merging, which is CPU and
> I/O intensive. If
> you’re using a machine that can’t afford the cycles to be taken up by
> merging, that could account
> for what you see, but new searchers are being opened every 2 seconds
>
towarming. That should smooth out
> the delay your users experience when commits happen.
>
> Best,
> Erick
>
> > On Mar 30, 2020, at 4:06 PM, Revas wrote:
> >
> > Thanks, Eric.
> >
> > 1) We are using dynamic string field for faceting where indexing
ery 2 secs.
On Mon, Mar 30, 2020 at 4:06 PM Revas wrote:
> Thanks, Eric.
>
> 1) We are using a dynamic string field for faceting, with indexed=false
> and stored=false. By default docValues are enabled for primitive fields
> (Solr 6.6), so they are not explicitly defined in the schema. Do
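(For reference, such a faceting field would typically be declared with a
dynamicField rule along these lines; the name pattern and type are only
examples, not the poster's exact schema:)

  <dynamicField name="*_facet" type="string" indexed="false" stored="false"
                docValues="true"/>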
I’d double check <1> first.
>
> Best,
> Erick
>
> > On Mar 30, 2020, at 12:20 PM, sujatha arun wrote:
> >
> > A facet-heavy query which uses docValues fields for faceting and returns
> > about 5k results executes in anywhere between 10 ms and 5 secs, and the 5
> > secs times seem to coincide with a hard commit.
> >
> > Does that have any relation? Why the fluctuation in execution time?
> >
> > Thanks,
> > Revas
>
>
Thanks for the response. What happens in this scenario?
Does the commit happen in this case, or does the search server hang, or does it
just throw an error without committing?
Regards
Sujatha
On Mon, May 3, 2010 at 11:41 PM, Chris Hostetter
wrote:
> : When I run 2-3 commits in parallel to different instances
PERFORMANCE WARNING: Overlapping onDeckSearchers=2
What is the best approach to solve this?
Regards
revas
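For reference, the usual levers for this warning are to space commits far
enough apart that one searcher finishes warming before the next one starts,
and/or to reduce autowarming so warming completes faster; maxWarmingSearchers
only caps how many searchers may warm at once. A sketch of the relevant
solrconfig.xml pieces (values purely illustrative):

  <maxWarmingSearchers>2</maxWarmingSearchers>
  <filterCache class="solr.LRUCache" size="512" initialSize="512"
               autowarmCount="32"/>  <!-- smaller autowarmCount = faster warming -->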
Thanks, Erik.
On Fri, Jan 8, 2010 at 4:34 PM, Erik Hatcher wrote:
>
> On Jan 8, 2010, at 4:14 AM, revas wrote:
>
>> I would like to know if, by just copying the solr.war file to my existing
>> Solr 1.3 installation, the Lucene version is also upgraded to the current 2.9?
>
Hello,
I would like to know if, by just copying the solr.war file to my existing
Solr 1.3 installation, the Lucene version is also upgraded to the current 2.9?
I believe a reindex is not necessary, is that correct?
Is there anything else apart from this that I need to do to upgrade to the
latest Lucene
simple:peRsonal
simple:peRsonal
MultiPhraseQuery(simple:"(person pe) rsonal")
simple:"(person pe) rsonal"
What is this MultiPhraseQuery, and why is this a phrase query instead of a
simple query?
Regards
Revas
y both
query and index analyzer. Is this correct?
Regards
Revas
the above
For German language analysis, am I to use the standard analyzer with the
German filter factory and German stemmers?
Are there more language-specific tokenizers in Lucene, and if so, what are the
steps to integrate them into Solr?
Regards
Revas
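For reference, the combination being asked about would look roughly like this
in schema.xml (a sketch only; the type name is arbitrary, and the Snowball
German stemmer is just one of the stemming options):

  <fieldType name="text_de" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <filter class="solr.SnowballPorterFilterFactory" language="German"/>
    </analyzer>
  </fieldType>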
Hi Michael,
What is GNU gettext, and how can it be used in a multilanguage scenario?
Regards
Revas
On Wed, Jun 10, 2009 at 8:10 PM, Michael Ludwig wrote:
> Manepalli, Kalyan schrieb:
>
>> Hi,
>> I am trying to customize the response that I receive from Solr. In the
>&g
Thanks
On Tue, Jun 9, 2009 at 5:14 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> On Tue, Jun 9, 2009 at 4:32 PM, revas wrote:
>
> > Thanks Shalin. When we use the external file dictionary (if there is
> > one), then it should work fine, right, for spell
2009 at 2:56 PM, revas wrote:
>
> > But the spell check component uses the n-gram analyzer and hence should
> > work for any language, is this correct? Also, we can refer to an external
> > dictionary for suggestions; could this be in any language?
> >
>
> Yes
But the spell check component uses the n-gram analyzer and hence should work
for any language, is this correct? Also, we can refer to an external dictionary
for suggestions; could this be in any language?
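For reference, an external dictionary is usually wired up through the
file-based spell checker, roughly as below (a sketch; the dictionary file name
and index dir are placeholders, and the file itself can contain terms in
whatever language/encoding is declared here):

  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
    <lst name="spellchecker">
      <str name="name">file</str>
      <str name="classname">solr.FileBasedSpellChecker</str>
      <str name="sourceLocation">spellings.txt</str>
      <str name="characterEncoding">UTF-8</str>
      <str name="spellcheckIndexDir">./spellcheckerFile</str>
    </lst>
  </searchComponent>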
The open files issue is not because of spell check, as we have not implemented
that yet; every time we
, will the open
files be closed automatically, or would I have to reindex to close the open
files, or how do I close the already opened files? This is on Linux with Solr
1.3 and Tomcat 5.5.
Regards
Revas
On Sat, Jun 6, 2009 at 11:40 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> On Sat, May 30, 2009 at 9:48 AM, revas wrote:
>
> > Hi ,
> >
> > When I give a query like the following, why does it become a phrase query
> > as shown below?
> > T
in not in windows OS.
Any pointers
Regards
Revas
What is the drawback in using the compound file format for indexing when we
have several webapps in a single container?
Regards
Sujatha
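For reference, the compound file format is switched on in the Solr 1.x
solrconfig.xml roughly as below; the usual trade-off, as I understand it, is
fewer open file handles per index in exchange for some extra work at
index/merge time:

  <indexDefaults>
    <useCompoundFile>true</useCompoundFile>
  </indexDefaults>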
Regards
Revas
above languages. Will search be
the same across all the above cases?
thanks
revas
Hi ,
With respect to language support in Solr, we have analyzers for some
languages and stemmers for certain languages. Do we say that Solr supports
a particular language only if we have both an analyzer and a stemmer for the
language, or also for languages where we have an analyzer but not a stemmer?
Regards
Suja
Hi,
I typically issue a facet drill-down query thus:
q=somequery AND facetfield:facetval
Are there any issues with the above approach as opposed to
&fq=facetfield:value, in terms of memory consumption and the use of caches?
Regards
Sujatha
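For comparison, the two forms in question are roughly (query and field values
are placeholders):

  q=somequery AND facetfield:facetval    <- one combined query, cached as a
                                            single queryResultCache entry
  q=somequery&fq=facetfield:facetval     <- filter cached separately in the
                                            filterCache and reusable across
                                            different q values

As far as I understand, the fq form is generally preferred for drill-down
because the cached filter can be reused, but the exact memory behaviour depends
on the cache sizes configured in solrconfig.xml.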
If I don't explicitly set any default query in the solrconfig.xml for
caching and make use of the default config file, does Solr do the caching
automatically based on the query?
Thanks
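For reference, the stock solrconfig.xml already declares these caches, and Solr
uses them automatically for queries and filters; the sketch below only shows
the kind of entries involved (sizes and autowarm counts are illustrative):

  <filterCache      class="solr.LRUCache" size="512" initialSize="512" autowarmCount="128"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
  <documentCache    class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>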
you do need to reindex after removing the stopword filter
> from the configuration. When you indexed the first time using
> the stopword filter, the words were NOT indexed, so they won't
> be found now that they're getting through the query analyzer.
>
> Best
> Erick
>
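(For reference, removing the stop word filter amounts to a field type along
these lines, followed by a full reindex; this is only a sketch, not the
poster's actual schema:)

  <fieldType name="text" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- solr.StopFilterFactory removed from both index and query analyzers -->
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>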
ish?
>
> Best
> Erick
>
>
>
> On Tue, Mar 17, 2009 at 7:40 AM, revas wrote:
>
> > Hi,
> >
> > I have a query like this
> >
> > content:the AND iuser_id:5
> >
> > which means return all docs of user id 5 which have the word "the"
Hi,
I have a query like this
content:the AND iuser_id:5
which means return all docs of user id 5 which have the word "the" in
content. Since 'the' is a stop word, this query executes as just user_id:5
in spite of the "AND" clause, whereas the expected result here is, since there
is no result for
Hi,
If I were to add a second server for sharding once the first server reaches
its limit, and then I need to update a document, how can I figure out on
which server the document is located?
Regards
Sujatha
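For reference, in this (pre-SolrCloud) setup the application decides which
shard a document lives on, typically with a fixed rule such as
hash(uniqueKey) mod numberOfShards, and updates always go to that same shard;
queries can then fan out over all shards. A sketch (host names are
placeholders):

  http://server1:8983/solr/select?q=id:123
      &shards=server1:8983/solr,server2:8983/solr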
Hi,
I am trying to do a multicore setup.
I added the following from the Solr 1.3 download to a new dir called multicore:
core0, core1, solr.xml and solr.war.
In the Tomcat context fragment I have defined as
http://localhost:8080/multicore/admin
http://localhost:8080/multicore/admin/core0
Th
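For reference, the solr.xml that ships with the Solr 1.3 multicore example
looks roughly like this (the core names and paths here match the example
layout, not necessarily the exact setup described above):

  <solr persistent="false">
    <cores adminPath="/admin/cores">
      <core name="core0" instanceDir="core0"/>
      <core name="core1" instanceDir="core1"/>
    </cores>
  </solr>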
Hi,
I just want to confirm my understanding of the Luke request handler.
It gives us the raw Lucene index tokens on a field-by-field basis.
What should the query be to return all tokens for a field?
Is there any way to return all the tokens across all fields?
Regards
Revas
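For reference, the Luke handler can be asked for the top terms of one field
with the fl and numTerms parameters, roughly as below (host, port and field
name are placeholders); leaving fl off returns the top terms for every field,
subject to the same numTerms limit per field, but as far as I know there is no
single call that dumps every token of the whole index:

  http://localhost:8080/solr/admin/luke?fl=myfield&numTerms=1000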
The Luke request handler returns all the tokens from the index, is this
correct?
On 3/5/09, revas wrote:
>
> We will be using SQLite for the DB. This can be used for a CD version where
> we need to provide search
>
>
> On 3/5/09, Grant Ingersoll wrote:
>>
>>
>&g
We will be using SQLite for the DB. This can be used for a CD version where we
need to provide search.
On 3/5/09, Grant Ingersoll wrote:
>
>
> On Mar 5, 2009, at 3:10 AM, revas wrote:
>
> Hi,
>>
>> I have a requirement where I need to search offline. We are thinking of
ight
> > need a different URL depending on the version of Tomcat you're
> > running).
> >
> > Michael
> >
> > On Wed, Feb 25, 2009 at 11:42 AM, revas wrote:
> > > thanks will try that .I also have the war file for each solr instance
> in
> >
Hi,
If I need to change the Lucene version of Solr, then how can we do this?
Regards
Revas
Zend Lucene is not able to
open the Solr index files, the error being "unsupported format".
The final option is to reindex using Zend Lucene and read the index tokens,
but then facets are not supported by Zend Lucene.
Any body done something similar,please give your thoughts or pointers
Regards
Revas
Thanks, will try that. I also have the war file for each Solr instance in the
home directory of the instance; would that be the problem?
If I were to have a common war file for n instances, would there be any issue?
regards
revas
On 2/25/09, Michael Della Bitta wrote:
>
> It's possi
Hi
I am sure this question has been repeated many times over and there have been
several generic answers, but I am looking for specific answers.
I have a single server whose configuration I give below; this being the only
server we have at present, the requirement is that every time we create a new
websi