Sort not working in solr

2014-07-15 Thread madhav bahuguna
I am trying to sort my records, but the result I get is not correct.
My URL query:
http://localhost:8983/solr/select/?&q=*:*&fl=business_point&sort=business_point+desc

I am trying to sort my records by business_point, but the results come back
like this:
9
8
7
6
5
45
4
4
10
1

Why am I getting my results in the wrong order?
My schema looks like this:

<field name="business_point" type="text_general" indexed="true" stored="true" required="false" multiValued="false"/>

-- 
Regards
Madhav Bahuguna


Re: Sort not working in solr

2014-07-15 Thread Alexandre Rafalovitch
That's always what happens (not just in Solr) when you store numbers
as text. You could store them as text-sortable numbers with leading
zeros (e.g. 04 vs. 45), but then what happens when you hit 100?
Alternatively, if that field has only numbers, index it as a numeric
type. Make sure to use one of the new types (see the recent Solr
example), not the old type, which has the same problem.
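
A rough sketch of what that might look like in schema.xml, assuming the
Trie-based "int" type from the stock example schema (the exact attributes on
the field may differ, and a full re-index is needed after the change):

    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
    <field name="business_point" type="int" indexed="true" stored="true" required="false" multiValued="false"/>

With that in place, sort=business_point+desc should return 45, 10, 9, 8, ...
in true numeric order.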

Regards,
  Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On Tue, Jul 15, 2014 at 2:24 PM, madhav bahuguna
 wrote:
> Iam trying to sort my records but the result i get is not correct
> My url query--
> http://localhost:8983/solr/select/?&q=*:*&fl=business_point&sort=business_point+desc
>
> Iam trying to sort my records by business_points but the result i get is in
> like this
> 9
> 8
> 7
> 6
> 5
> 45
> 4
> 4
> 10
> 1
>
> Whys am i getting my results in the wrong order
> my schema looks like this
> <field name="business_point" type="text_general" indexed="true" stored="true" required="false" multiValued="false"/>
>
> --
> Regards
> Madhav Bahuguna


Re: Sort not working in solr

2014-07-15 Thread スガヌマヨシカズ
I think type="text_general" makes the field sort numbers as characters
(lexicographically).

How about making it type="int" or type="long" instead of "text_general"?

<field name="business_point" type="int" indexed="true" stored="true" required="false" multiValued="false"/>

Regards,
suganuma


2014-07-15 16:24 GMT+09:00 madhav bahuguna :

> Iam trying to sort my records but the result i get is not correct
> My url query--
>
> http://localhost:8983/solr/select/?&q=*:*&fl=business_point&sort=business_point+desc
>
> Iam trying to sort my records by business_points but the result i get is in
> like this
> 9
> 8
> 7
> 6
> 5
> 45
> 4
> 4
> 10
> 1
>
> Whys am i getting my results in the wrong order
> my schema looks like this
> <field name="business_point" type="text_general" indexed="true" stored="true" required="false" multiValued="false"/>
>
> --
> Regards
> Madhav Bahuguna
>


Re: Of, To, and Other Small Words

2014-07-15 Thread Aman Tandon
Hi Jack,


it will use the internal *Lucene hardwired list* of stop words


I am unaware of this; could you please provide more information about
this?


With Regards
Aman Tandon


On Tue, Jul 15, 2014 at 7:21 AM, Alexandre Rafalovitch 
wrote:

> You could try experimenting with CommonGramsFilterFactory and
> CommonGramsQueryFilter (slightly different). There is actually a lot
> of cool analyzers bundled with Solr. You can find full list on my site
> at: http://www.solr-start.com/info/analyzers
>
> Regards,
>Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On Tue, Jul 15, 2014 at 8:42 AM, Teague James 
> wrote:
> > Alex,
> >
> > Thanks! Great suggestion. I figured out that it was the
> EdgeNGramFilterFactory. Taking that out of the mix did it.
> >
> > -Teague
> >
> > -Original Message-
> > From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
> > Sent: Monday, July 14, 2014 9:14 PM
> > To: solr-user
> > Subject: Re: Of, To, and Other Small Words
> >
> > Have you tried the Admin UI's Analyze screen. Because it will show you
> what happens to the text as it progresses through the tokenizers and
> filters. No need to reindex.
> >
> > Regards,
> >Alex.
> > Personal: http://www.outerthoughts.com/ and @arafalov Solr resources:
> http://www.solr-start.com/ and @solrstart Solr popularizers community:
> https://www.linkedin.com/groups?gid=6713853
> >
> >
> > On Tue, Jul 15, 2014 at 8:10 AM, Teague James 
> wrote:
> >> Hi Anshum,
> >>
> >> Thanks for replying and suggesting this, but the field type I am using
> (a modified text_general) in my schema has the file set to 'stopwords.txt'.
> >>
> >>  positionIncrementGap="100">
> >> 
> >>  class="solr.StandardTokenizerFactory"/>
> >>  ignoreCase="true" words="stopwords.txt" />
> >> 
> >> 
> >> 
> >>  minGramSize="3" maxGramSize="10" />
> >> 
> >> 
> >> 
> >> 
> >>  class="solr.StandardTokenizerFactory"/>
> >>  ignoreCase="true" words="stopwords.txt" />
> >>  synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
> >> 
> >> 
> >> 
> >> 
> >> 
> >>
> >> Just to be double sure I cleared the list in stopwords_en.txt,
> restarted Solr, re-indexed, and searched with still zero results. Any other
> suggestions on where I might be able to control this behavior?
> >>
> >> -Teague
> >>
> >>
> >> -Original Message-
> >> From: Anshum Gupta [mailto:ans...@anshumgupta.net]
> >> Sent: Monday, July 14, 2014 4:04 PM
> >> To: solr-user@lucene.apache.org
> >> Subject: Re: Of, To, and Other Small Words
> >>
> >> Hi Teague,
> >>
> >> The StopFilterFactory (which I think you're using) by default uses
> lang/stopwords_en.txt (which wouldn't be empty if you check).
> >> What you're looking at is the stopword.txt. You could either empty that
> file out or change the field type for your field.
> >>
> >>
> >> On Mon, Jul 14, 2014 at 12:53 PM, Teague James <
> teag...@insystechinc.com> wrote:
> >>> Hello all,
> >>>
> >>> I am working with Solr 4.9.0 and am searching for phrases that
> >>> contain words like "of" or "to" that Solr seems to be ignoring at
> index time.
> >>> Here's what I tried:
> >>>
> >>> curl http://localhost/solr/update?commit=true -H "Content-Type:
> text/xml"
> >>> --data-binary '<add><doc><field name="id">100</field><field
> >>> name="content">blah blah blah knowledge of science blah blah
> >>> blah</field></doc></add>'
> >>>
> >>> Then, using a broswer:
> >>>
> >>> 
> >>> i
> >>> d:100
> >>>
> >>> I get zero hits. Search for "knowledge" or "science" and I'll get hits.
> >>> "knowledge of" or "of science" and I get zero hits. I don't want to
> >>> use proximity if I can avoid it, as this may introduce too many
> >>> undesireable results. Stopwords.txt is blank, yet clearly Solr is
> ignoring "of" and "to"
> >>> and possibly more words that I have not discovered through testing
> >>> yet. Is there some other configuration file that contains these small
> >>> words? Is there any way to force Solr to pay attention to them and
> >>> not drop them from the phrase? Any advice is appreciated! Thanks!
> >>>
> >>> -Teague
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >>
> >> Anshum Gupta
> >> http://www.anshumgupta.net
> >>
> >
>


Re: Of, To, and Other Small Words

2014-07-15 Thread Alexandre Rafalovitch
https://github.com/apache/lucene-solr/blob/lucene_solr_4_9_0/lucene/analysis/common/src/java/org/apache/lucene/analysis/core/StopAnalyzer.java#L51

If you don't set the "words" attribute in the XML configuration, it falls
back to the default (hardwired) definitions.
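
A hedged illustration of the difference, using the StopFilterFactory syntax
quoted earlier in this thread:

    <!-- no "words" attribute: falls back to the hardwired English list linked above -->
    <filter class="solr.StopFilterFactory" ignoreCase="true"/>

    <!-- explicit "words" attribute: only that file is used, so an empty file keeps every term -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
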
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On Tue, Jul 15, 2014 at 3:16 PM, Aman Tandon  wrote:
> Hi jack,
>
>
> it will use the internal *Lucene hardwired list* of stop words
>
>
> I am unaware of this, could you please provide the more information about
> this.
>
>
> With Regards
> Aman Tandon
>
>
> On Tue, Jul 15, 2014 at 7:21 AM, Alexandre Rafalovitch 
> wrote:
>
>> You could try experimenting with CommonGramsFilterFactory and
>> CommonGramsQueryFilter (slightly different). There is actually a lot
>> of cool analyzers bundled with Solr. You can find full list on my site
>> at: http://www.solr-start.com/info/analyzers
>>
>> Regards,
>>Alex.
>> Personal: http://www.outerthoughts.com/ and @arafalov
>> Solr resources: http://www.solr-start.com/ and @solrstart
>> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>>
>>
>> On Tue, Jul 15, 2014 at 8:42 AM, Teague James 
>> wrote:
>> > Alex,
>> >
>> > Thanks! Great suggestion. I figured out that it was the
>> EdgeNGramFilterFactory. Taking that out of the mix did it.
>> >
>> > -Teague
>> >
>> > -Original Message-
>> > From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
>> > Sent: Monday, July 14, 2014 9:14 PM
>> > To: solr-user
>> > Subject: Re: Of, To, and Other Small Words
>> >
>> > Have you tried the Admin UI's Analyze screen. Because it will show you
>> what happens to the text as it progresses through the tokenizers and
>> filters. No need to reindex.
>> >
>> > Regards,
>> >Alex.
>> > Personal: http://www.outerthoughts.com/ and @arafalov Solr resources:
>> http://www.solr-start.com/ and @solrstart Solr popularizers community:
>> https://www.linkedin.com/groups?gid=6713853
>> >
>> >
>> > On Tue, Jul 15, 2014 at 8:10 AM, Teague James 
>> wrote:
>> >> Hi Anshum,
>> >>
>> >> Thanks for replying and suggesting this, but the field type I am using
>> (a modified text_general) in my schema has the file set to 'stopwords.txt'.
>> >>
>> >> > positionIncrementGap="100">
>> >> 
>> >> > class="solr.StandardTokenizerFactory"/>
>> >> > ignoreCase="true" words="stopwords.txt" />
>> >> 
>> >> 
>> >> 
>> >> > minGramSize="3" maxGramSize="10" />
>> >> 
>> >> 
>> >> 
>> >> 
>> >> > class="solr.StandardTokenizerFactory"/>
>> >> > ignoreCase="true" words="stopwords.txt" />
>> >> > synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>> >> 
>> >> 
>> >> 
>> >> 
>> >> 
>> >>
>> >> Just to be double sure I cleared the list in stopwords_en.txt,
>> restarted Solr, re-indexed, and searched with still zero results. Any other
>> suggestions on where I might be able to control this behavior?
>> >>
>> >> -Teague
>> >>
>> >>
>> >> -Original Message-
>> >> From: Anshum Gupta [mailto:ans...@anshumgupta.net]
>> >> Sent: Monday, July 14, 2014 4:04 PM
>> >> To: solr-user@lucene.apache.org
>> >> Subject: Re: Of, To, and Other Small Words
>> >>
>> >> Hi Teague,
>> >>
>> >> The StopFilterFactory (which I think you're using) by default uses
>> lang/stopwords_en.txt (which wouldn't be empty if you check).
>> >> What you're looking at is the stopword.txt. You could either empty that
>> file out or change the field type for your field.
>> >>
>> >>
>> >> On Mon, Jul 14, 2014 at 12:53 PM, Teague James <
>> teag...@insystechinc.com> wrote:
>> >>> Hello all,
>> >>>
>> >>> I am working with Solr 4.9.0 and am searching for phrases that
>> >>> contain words like "of" or "to" that Solr seems to be ignoring at
>> index time.
>> >>> Here's what I tried:
>> >>>
>> >>> curl http://localhost/solr/update?commit=true -H "Content-Type:
>> text/xml"
>> >>> --data-binary '100> >>> name="content">blah blah blah knowledge of science blah blah
>> >>> blah'
>> >>>
>> >>> Then, using a broswer:
>> >>>
>> >>> 
>> >>> i
>> >>> d:100
>> >>>
>> >>> I get zero hits. Search for "knowledge" or "science" and I'll get hits.
>> >>> "knowledge of" or "of science" and I get zero hits. I don't want to
>> >>> use proximity if I can avoid it, as this may introduce too many
>> >>> undesireable results. Stopwords.txt is blank, yet clearly Solr is
>> ignoring "of" and "to"
>> >>> and possibly more words that I have not discovered through testing
>> >>> yet. Is there some other configuration file that contai

Re: Slow inserts when using Solr Cloud

2014-07-15 Thread ian
Hi Mark

Thanks for replying to my post.  Would you know whether my findings are
consistent with what other people see when using SolrCloud?

One thing I want to investigate is whether I can route my updates to the
correct shard in the first place, by having my client using the same hashing
logic as Solr, and working out in advance which shard my inserts should be
sent to.  Do you know whether that's an approach that others have used?

Thanks again
Ian



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Slow-inserts-when-using-Solr-Cloud-tp4146087p4147183.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Sort not working in solr

2014-07-15 Thread Apoorva Gaurav
In fact, it's better to use TrieIntField instead of IntField.
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201301.mbox/%3ccab_8yd9yp259kk4ciybbprjcpwqp6vd7yvrtjr1eubew_ky...@mail.gmail.com%3E
http://stackoverflow.com/questions/13372323/what-is-the-correct-solr-fieldtype-to-use-for-sorting-integer-values
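
For illustration, a hedged sketch of the two declarations (type names here
follow the old example schema; only the Trie version should be used now):

    <!-- legacy plain int type, deprecated -->
    <fieldType name="pint" class="solr.IntField"/>

    <!-- Trie-based int type, the recommended choice for sorting and range queries -->
    <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
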


On Tue, Jul 15, 2014 at 1:09 PM, スガヌマヨシカズ  wrote:

> i think type="text_general" make it charactered-sort for numbers.
>
> How about make it as type="int" or type="long" instead of "text_general"?
>
> 
> <field name="business_point" type="int" indexed="true" stored="true" required="false" multiValued="false"/>
> 
>
> Regards,
> suganuma
>
>
> 2014-07-15 16:24 GMT+09:00 madhav bahuguna :
>
> > Iam trying to sort my records but the result i get is not correct
> > My url query--
> >
> >
> http://localhost:8983/solr/select/?&q=*:*&fl=business_point&sort=business_point+desc
> >
> > Iam trying to sort my records by business_points but the result i get is
> in
> > like this
> > 9
> > 8
> > 7
> > 6
> > 5
> > 45
> > 4
> > 4
> > 10
> > 1
> >
> > Whys am i getting my results in the wrong order
> > my schema looks like this
> > <field name="business_point" type="text_general" indexed="true" stored="true" required="false" multiValued="false"/>
> >
> > --
> > Regards
> > Madhav Bahuguna
> >
>



-- 
Thanks & Regards,
Apoorva


Re: Of, To, and Other Small Words

2014-07-15 Thread Jack Krupansky
Yeah, this is another one of those places where the behavior of Solr is 
defined but way down in the Lucene Javadoc, where no Solr user should ever 
have to go!


It's also the kind of detail documented in my Solr Deep Dive e-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-7/ebook/product-21203548.html

-- Jack Krupansky

-Original Message- 
From: Alexandre Rafalovitch

Sent: Tuesday, July 15, 2014 4:36 AM
To: solr-user
Subject: Re: Of, To, and Other Small Words

https://github.com/apache/lucene-solr/blob/lucene_solr_4_9_0/lucene/analysis/common/src/java/org/apache/lucene/analysis/core/StopAnalyzer.java#L51

If you don't set the attribute in XML file, it falls back to the
default definitions.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On Tue, Jul 15, 2014 at 3:16 PM, Aman Tandon  
wrote:

Hi jack,


it will use the internal *Lucene hardwired list* of stop words


I am unaware of this, could you please provide the more information about
this.


With Regards
Aman Tandon


On Tue, Jul 15, 2014 at 7:21 AM, Alexandre Rafalovitch 


wrote:


You could try experimenting with CommonGramsFilterFactory and
CommonGramsQueryFilter (slightly different). There is actually a lot
of cool analyzers bundled with Solr. You can find full list on my site
at: http://www.solr-start.com/info/analyzers

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On Tue, Jul 15, 2014 at 8:42 AM, Teague James 
wrote:
> Alex,
>
> Thanks! Great suggestion. I figured out that it was the
EdgeNGramFilterFactory. Taking that out of the mix did it.
>
> -Teague
>
> -Original Message-
> From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
> Sent: Monday, July 14, 2014 9:14 PM
> To: solr-user
> Subject: Re: Of, To, and Other Small Words
>
> Have you tried the Admin UI's Analyze screen. Because it will show you
what happens to the text as it progresses through the tokenizers and
filters. No need to reindex.
>
> Regards,
>Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov Solr resources:
http://www.solr-start.com/ and @solrstart Solr popularizers community:
https://www.linkedin.com/groups?gid=6713853
>
>
> On Tue, Jul 15, 2014 at 8:10 AM, Teague James 
> 

wrote:
>> Hi Anshum,
>>
>> Thanks for replying and suggesting this, but the field type I am using
(a modified text_general) in my schema has the file set to 
'stopwords.txt'.

>>
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> Just to be double sure I cleared the list in stopwords_en.txt,
restarted Solr, re-indexed, and searched with still zero results. Any 
other

suggestions on where I might be able to control this behavior?
>>
>> -Teague
>>
>>
>> -Original Message-
>> From: Anshum Gupta [mailto:ans...@anshumgupta.net]
>> Sent: Monday, July 14, 2014 4:04 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Of, To, and Other Small Words
>>
>> Hi Teague,
>>
>> The StopFilterFactory (which I think you're using) by default uses
lang/stopwords_en.txt (which wouldn't be empty if you check).
>> What you're looking at is the stopword.txt. You could either empty 
>> that

file out or change the field type for your field.
>>
>>
>> On Mon, Jul 14, 2014 at 12:53 PM, Teague James <
teag...@insystechinc.com> wrote:
>>> Hello all,
>>>
>>> I am working with Solr 4.9.0 and am searching for phrases that
>>> contain words like "of" or "to" that Solr seems to be ignoring at
index time.
>>> Here's what I tried:
>>>
>>> curl http://localhost/solr/update?commit=true -H "Content-Type:
text/xml"
>>> --data-binary '100>> name="content">blah blah blah knowledge of science blah blah
>>> blah'
>>>
>>> Then, using a broswer:
>>>
>>> 
>>> i
>>> d:100
>>>
>>> I get zero hits. Search for "knowledge" or "science" and I'll get 
>>> hits.

>>> "knowledge of" or "of science" and I get zero hits. I don't want to
>>> use proximity if I can avoid it, as this may introduce too many
>>> undesireable results. Stopwords.txt is blank, yet clearly Solr is
ignoring "of" and "to"
>>> and possibly more words that I have not discovered through testing
>>> yet. Is there some other configuration file that contains these small
>>> words? Is there any way to force Solr to pay attention to them and
>>> not drop them from the phrase

Re: Of, To, and Other Small Words

2014-07-15 Thread Jack Krupansky

Oops... forgot the link to the stop filter factory Javadoc:
http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/core/StopFilterFactory.html

-- Jack Krupansky

-Original Message- 
From: Jack Krupansky

Sent: Tuesday, July 15, 2014 7:42 AM
To: solr-user@lucene.apache.org
Subject: Re: Of, To, and Other Small Words

Yeah, this is another one of those places where the behavior of Solr is
defined but way down in the Lucene Javadoc, where no Solr user should ever
have to go!

It's also the kind of detail documented in my Solr Deep Dive e-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-7/ebook/product-21203548.html

-- Jack Krupansky

-Original Message- 
From: Alexandre Rafalovitch

Sent: Tuesday, July 15, 2014 4:36 AM
To: solr-user
Subject: Re: Of, To, and Other Small Words

https://github.com/apache/lucene-solr/blob/lucene_solr_4_9_0/lucene/analysis/common/src/java/org/apache/lucene/analysis/core/StopAnalyzer.java#L51

If you don't set the attribute in XML file, it falls back to the
default definitions.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On Tue, Jul 15, 2014 at 3:16 PM, Aman Tandon 
wrote:

Hi jack,


it will use the internal *Lucene hardwired list* of stop words


I am unaware of this, could you please provide the more information about
this.


With Regards
Aman Tandon


On Tue, Jul 15, 2014 at 7:21 AM, Alexandre Rafalovitch 


wrote:


You could try experimenting with CommonGramsFilterFactory and
CommonGramsQueryFilter (slightly different). There is actually a lot
of cool analyzers bundled with Solr. You can find full list on my site
at: http://www.solr-start.com/info/analyzers

Regards,
   Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On Tue, Jul 15, 2014 at 8:42 AM, Teague James 
wrote:
> Alex,
>
> Thanks! Great suggestion. I figured out that it was the
EdgeNGramFilterFactory. Taking that out of the mix did it.
>
> -Teague
>
> -Original Message-
> From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
> Sent: Monday, July 14, 2014 9:14 PM
> To: solr-user
> Subject: Re: Of, To, and Other Small Words
>
> Have you tried the Admin UI's Analyze screen. Because it will show you
what happens to the text as it progresses through the tokenizers and
filters. No need to reindex.
>
> Regards,
>Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov Solr resources:
http://www.solr-start.com/ and @solrstart Solr popularizers community:
https://www.linkedin.com/groups?gid=6713853
>
>
> On Tue, Jul 15, 2014 at 8:10 AM, Teague James 
> 

wrote:
>> Hi Anshum,
>>
>> Thanks for replying and suggesting this, but the field type I am using
(a modified text_general) in my schema has the file set to 
'stopwords.txt'.

>>
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> Just to be double sure I cleared the list in stopwords_en.txt,
restarted Solr, re-indexed, and searched with still zero results. Any 
other

suggestions on where I might be able to control this behavior?
>>
>> -Teague
>>
>>
>> -Original Message-
>> From: Anshum Gupta [mailto:ans...@anshumgupta.net]
>> Sent: Monday, July 14, 2014 4:04 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Of, To, and Other Small Words
>>
>> Hi Teague,
>>
>> The StopFilterFactory (which I think you're using) by default uses
lang/stopwords_en.txt (which wouldn't be empty if you check).
>> What you're looking at is the stopword.txt. You could either empty 
>> that

file out or change the field type for your field.
>>
>>
>> On Mon, Jul 14, 2014 at 12:53 PM, Teague James <
teag...@insystechinc.com> wrote:
>>> Hello all,
>>>
>>> I am working with Solr 4.9.0 and am searching for phrases that
>>> contain words like "of" or "to" that Solr seems to be ignoring at
index time.
>>> Here's what I tried:
>>>
>>> curl http://localhost/solr/update?commit=true -H "Content-Type:
text/xml"
>>> --data-binary '100>> name="content">blah blah blah knowledge of science blah blah
>>> blah'
>>>
>>> Then, using a broswer:
>>>
>>> 
>>> i
>>> d:100
>>>
>>> I get zero hits. Search for "knowledge" or "science" and I'll get 
>>> hits.

>>> "knowledge of" or "of science" and I get zero hits. I don't want to
>>> use proximity if I can avoid it, as this may introduce too

Re: SOLR-6143 Bad facet counts from CollapsingQParserPlugin

2014-07-15 Thread Joel Bernstein
They should be the same as long as the same group heads are selected with both
queries. The CollapsingQParserPlugin simply collapses the result set and
then forwards to lower collectors, so the DocSet created should always be
for the collapsed set.
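
For comparison, a hedged sketch of the two request forms being discussed
(the field and facet names are placeholders):

    grouping with truncated facet counts:
      q=*:*&group=true&group.field=groupId&group.truncate=true&facet=true&facet.field=brand

    CollapsingQParserPlugin:
      q=*:*&fq={!collapse field=groupId}&facet=true&facet.field=brand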





Joel Bernstein
Search Engineer at Heliosearch


On Sun, Jul 13, 2014 at 11:17 PM, Umesh Prasad  wrote:

> Hi Joel,
>  Actually I also have seen this. The counts given by groups.truncate
> and collapsingQParserPlugin differ.. We have a golden query framework for
> our product APIs and there we have seen differences in facet count given.
> One request uses groups.truncate and another collapsingQParser plugin and
> we have seen counts differ (By a small margin)
> I haven't been able to isolate the issue to a unit test level, so I
> haven't raised a bug.
>
>
>
>
> On 12 July 2014 08:57, Joel Bernstein  wrote:
>
> > The CollapsingQParserPlugin currently supports facet counts that match
> > "group.truncate". This works great for some use cases.
> >
> > There are use cases though where "group.facets" counts are preferred. No
> > timetable yet on adding this feature for the CollapsingQParserPlugin.
> >
> > Joel Bernstein
> > Search Engineer at Heliosearch
> >
> >
> > On Thu, Jul 10, 2014 at 7:20 PM, shamik  wrote:
> >
> > > Are there any plans to release this feature anytime soon ? I think this
> > is
> > > pretty important as a lot of search use case are dependent on the facet
> > > count being returned by the search result. This issue renders renders
> the
> > > CollapsingQParserPlugin pretty much unusable. I'm now reverting back to
> > the
> > > old group query (painfully slow) since I can't use the facet count
> > anymore.
> > >
> > >
> > >
> > > --
> > > View this message in context:
> > >
> >
> http://lucene.472066.n3.nabble.com/RE-SOLR-6143-Bad-facet-counts-from-CollapsingQParserPlugin-tp4140455p4146645.html
> > > Sent from the Solr - User mailing list archive at Nabble.com.
> > >
> >
>
>
>
> --
> ---
> Thanks & Regards
> Umesh Prasad
>


Re: Slow inserts when using Solr Cloud

2014-07-15 Thread Shalin Shekhar Mangar
You can use CloudSolrServer (if you're using Java) which will route
documents correctly to the leader of the appropriate shard.
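
A minimal SolrJ sketch of that approach (the ZooKeeper hosts and collection
name below are placeholders):

    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class CloudIndexExample {
        public static void main(String[] args) throws Exception {
            // Reads cluster state from ZooKeeper and routes each update
            // to the leader of the shard that owns the document's id.
            CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
            server.setDefaultCollection("collection1");

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "example-1");
            server.add(doc);
            server.commit();
            server.shutdown();
        }
    }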


On Tue, Jul 15, 2014 at 3:04 PM, ian  wrote:

> Hi Mark
>
> Thanks for replying to my post.  Would you know whether my findings are
> consistent with what other people see when using SolrCloud?
>
> One thing I want to investigate is whether I can route my updates to the
> correct shard in the first place, by having my client using the same
> hashing
> logic as Solr, and working out in advance which shard my inserts should be
> sent to.  Do you know whether that's an approach that others have used?
>
> Thanks again
> Ian
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Slow-inserts-when-using-Solr-Cloud-tp4146087p4147183.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>



-- 
Regards,
Shalin Shekhar Mangar.


Boost using date and a field value

2014-07-15 Thread Hakim Benoudjit
Hi,
I want to boost recent (*today's*) documents having a certain *field value*.
The two fields to be boosted are, respectively, '*date*' and '*site*'.
But I don't want to penalize *recent* documents not satisfying the field
value ('*site*') in favor of *older* documents satisfying this field value
('*site*').

- I've boosted documents having this field value ('*site*') using a *dismax
boost query*.
- And I've found in the Solr docs how to boost *recent* docs:
https://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_boost_the_score_of_newer_documents

- But I can't combine these two boosts.
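
For reference, a hedged sketch of one way the two might be combined in a
single edismax request (the query, site value and weight are placeholders;
the date function is the one from the FAQ above, applied to the 'date' field):

    q=laptop
    &defType=edismax
    &bq=site:example.com^2.0
    &boost=recip(ms(NOW,date),3.16e-11,1,1)

bq only adds score when 'site' matches, while the multiplicative recency
boost applies to every document, so the two preferences can be weighted
independently instead of one excluding the other.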


-- 
Hakim Benoudjit.


Better Apache Camel Solr Support

2014-07-15 Thread Doug Turnbull
Hello everyone. I'm emailing the group because we've encountered a number
of people at conferences (ApacheCon, LuceneRev, etc.) who use Apache Camel
as their ingest pipeline for Solr. I've heard a number of folks discuss
their frustration with the lack of SolrCloud support, among other things, in
Camel. I wanted to pass along the work we've been doing to improve the
Apache Camel Solr component. We hope to keep iterating on the component,
providing additional features.

You can read more about our work here:
http://www.opensourceconnections.com/blog/2014/07/15/improving-the-camel-solr-component/

Let me or Scott Stults (sstu...@o19s.com) know if you have any
questions/feedback/pull requests/bugs/etc.

Thanks!
-- 
Doug Turnbull
Search & Big Data Architect
OpenSource Connections 


Re: Of, To, and Other Small Words

2014-07-15 Thread Walter Underwood
If you want to keep stopwords, take the stopword filter out of your analysis 
chain.
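
A minimal sketch of an index-time analyzer with the stop filter removed
(the lower-case filter is just an assumed stand-in for whatever else the
field type already uses):

    <analyzer type="index">
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <!-- solr.StopFilterFactory removed so "of", "to", etc. are indexed -->
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>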

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/


On Jul 15, 2014, at 1:36 AM, Alexandre Rafalovitch  wrote:

> https://github.com/apache/lucene-solr/blob/lucene_solr_4_9_0/lucene/analysis/common/src/java/org/apache/lucene/analysis/core/StopAnalyzer.java#L51
> 
> If you don't set the attribute in XML file, it falls back to the
> default definitions.
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
> 
> 
> On Tue, Jul 15, 2014 at 3:16 PM, Aman Tandon  wrote:
>> Hi jack,
>> 
>> 
>> it will use the internal *Lucene hardwired list* of stop words
>> 
>> 
>> I am unaware of this, could you please provide the more information about
>> this.
>> 
>> 
>> With Regards
>> Aman Tandon
>> 
>> 
>> On Tue, Jul 15, 2014 at 7:21 AM, Alexandre Rafalovitch 
>> wrote:
>> 
>>> You could try experimenting with CommonGramsFilterFactory and
>>> CommonGramsQueryFilter (slightly different). There is actually a lot
>>> of cool analyzers bundled with Solr. You can find full list on my site
>>> at: http://www.solr-start.com/info/analyzers
>>> 
>>> Regards,
>>>   Alex.
>>> Personal: http://www.outerthoughts.com/ and @arafalov
>>> Solr resources: http://www.solr-start.com/ and @solrstart
>>> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>>> 
>>> 
>>> On Tue, Jul 15, 2014 at 8:42 AM, Teague James 
>>> wrote:
 Alex,
 
 Thanks! Great suggestion. I figured out that it was the
>>> EdgeNGramFilterFactory. Taking that out of the mix did it.
 
 -Teague
 
 -Original Message-
 From: Alexandre Rafalovitch [mailto:arafa...@gmail.com]
 Sent: Monday, July 14, 2014 9:14 PM
 To: solr-user
 Subject: Re: Of, To, and Other Small Words
 
 Have you tried the Admin UI's Analyze screen. Because it will show you
>>> what happens to the text as it progresses through the tokenizers and
>>> filters. No need to reindex.
 
 Regards,
   Alex.
 Personal: http://www.outerthoughts.com/ and @arafalov Solr resources:
>>> http://www.solr-start.com/ and @solrstart Solr popularizers community:
>>> https://www.linkedin.com/groups?gid=6713853
 
 
 On Tue, Jul 15, 2014 at 8:10 AM, Teague James 
>>> wrote:
> Hi Anshum,
> 
> Thanks for replying and suggesting this, but the field type I am using
>>> (a modified text_general) in my schema has the file set to 'stopwords.txt'.
> 
>>> positionIncrementGap="100">
>
>>> class="solr.StandardTokenizerFactory"/>
>>> ignoreCase="true" words="stopwords.txt" />
>
>
>
>>> minGramSize="3" maxGramSize="10" />
>
>
>
>
>>> class="solr.StandardTokenizerFactory"/>
>>> ignoreCase="true" words="stopwords.txt" />
>>> synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>
>
>
>
>
> 
> Just to be double sure I cleared the list in stopwords_en.txt,
>>> restarted Solr, re-indexed, and searched with still zero results. Any other
>>> suggestions on where I might be able to control this behavior?
> 
> -Teague
> 
> 
> -Original Message-
> From: Anshum Gupta [mailto:ans...@anshumgupta.net]
> Sent: Monday, July 14, 2014 4:04 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Of, To, and Other Small Words
> 
> Hi Teague,
> 
> The StopFilterFactory (which I think you're using) by default uses
>>> lang/stopwords_en.txt (which wouldn't be empty if you check).
> What you're looking at is the stopword.txt. You could either empty that
>>> file out or change the field type for your field.
> 
> 
> On Mon, Jul 14, 2014 at 12:53 PM, Teague James <
>>> teag...@insystechinc.com> wrote:
>> Hello all,
>> 
>> I am working with Solr 4.9.0 and am searching for phrases that
>> contain words like "of" or "to" that Solr seems to be ignoring at
>>> index time.
>> Here's what I tried:
>> 
>> curl http://localhost/solr/update?commit=true -H "Content-Type:
>>> text/xml"
>> --data-binary '100> name="content">blah blah blah knowledge of science blah blah
>> blah'
>> 
>> Then, using a broswer:
>> 
>> 
>> i
>> d:100
>> 
>> I get zero hits. Search for "knowledge" or "science" and I'll get hits.
>> "knowledge of" or "of science" and I get zero hits. I don't want

HASH range calculation

2014-07-15 Thread Shubham Srivastava - Technology



Re: Usage of enablePositionIncrements in Stop filter

2014-07-15 Thread prashantc88
Guys, I found the explanation that I was looking for online.
http://docs.lucidworks.com/display/lweug/Suppressing+Stop+Word+Indexing
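
For anyone else landing here, a hedged illustration of what the linked page
describes (token positions shown are indicative):

    <filter class="solr.StopFilterFactory" words="stopwords.txt"
            ignoreCase="true" enablePositionIncrements="true"/>

    input: "knowledge of science", with "of" as a stopword
    enablePositionIncrements="true"  -> knowledge(1) science(3): the gap is kept,
                                        so the exact phrase "knowledge science" does not match
    enablePositionIncrements="false" -> knowledge(1) science(2): the terms become
                                        adjacent, so "knowledge science" matches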




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Usage-of-enablePositionIncrements-in-Stop-filter-tp4147321p4147324.html
Sent from the Solr - User mailing list archive at Nabble.com.


Usage of enablePositionIncrements in Stop filter

2014-07-15 Thread prashantc88
Hi,

Could anyone please explain the usage of enablePositionIncrements in
StopFilterFactory? I have been trying to search the forum as well as the
internet, but I cannot understand it. It would be great if anyone could help
me out with an example.

Thanks 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Usage-of-enablePositionIncrements-in-Stop-filter-tp4147321.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Usage of enablePositionIncrements in Stop filter

2014-07-15 Thread Jack Krupansky
It's a bug (file a Jira) that this Lucene (and Solr) feature is not 
documented in the Lucene Javadoc for the stop filter factory.


But, I do have it fully documented, with examples, in my Solr Deep Dive 
e-book:

http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-7/ebook/product-21203548.html

-- Jack Krupansky

-Original Message- 
From: prashantc88

Sent: Tuesday, July 15, 2014 1:58 PM
To: solr-user@lucene.apache.org
Subject: Re: Usage of enablePositionIncrements in Stop filter

Guys, I found the explanation that i was looking for online.
http://docs.lucidworks.com/display/lweug/Suppressing+Stop+Word+Indexing




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Usage-of-enablePositionIncrements-in-Stop-filter-tp4147321p4147324.html
Sent from the Solr - User mailing list archive at Nabble.com. 



Re: Strategies for effective prefix queries?

2014-07-15 Thread Hayden Muhl
Both fields? There is only one field here: username.


On Mon, Jul 14, 2014 at 6:17 PM, Alexandre Rafalovitch 
wrote:

> Search against both fields (one split, one not split)? Keep original
> and tokenized form? I am doing something similar with class name
> autocompletes here:
>
> https://github.com/arafalov/Solr-Javadoc/blob/master/JavadocIndex/JavadocCollection/conf/schema.xml#L24
>
> Regards,
>Alex.
> Personal: http://www.outerthoughts.com/ and @arafalov
> Solr resources: http://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On Tue, Jul 15, 2014 at 8:04 AM, Hayden Muhl  wrote:
> > I'm working on using Solr for autocompleting usernames. I'm running into
> a
> > problem with the wildcard queries (e.g. username:al*).
> >
> > We are tokenizing usernames so that a username like "solr-user" will be
> > tokenized into "solr" and "user", and will match both "sol" and "use"
> > prefixes. The problem is when we get "solr-u" as a prefix, I'm having to
> > split that up on the client side before I construct a query
> "username:solr*
> > username:u*". I'm basically using a regex as a poor man's tokenizer.
> >
> > Is there a better way to approach this? Is there a way to tell Solr to
> > tokenize a string and use the parts as prefixes?
> >
> > - Hayden
>


Re: Usage of enablePositionIncrements in Stop filter

2014-07-15 Thread prashantc88
Thanks Jack!



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Usage-of-enablePositionIncrements-in-Stop-filter-tp4147321p4147352.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Strategies for effective prefix queries?

2014-07-15 Thread Alexandre Rafalovitch
So copyField it to another field and apply alternative processing there. Use
eDismax to search both. No need to store the copied field; just index it.
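
A hedged sketch of that layout for the username case below (the second field
and its "text_prefix" type are made-up names for illustration):

    <field name="username"        type="text_general" indexed="true" stored="true"/>
    <field name="username_prefix" type="text_prefix"  indexed="true" stored="false"/>
    <copyField source="username" dest="username_prefix"/>

and then query something like defType=edismax&qf=username+username_prefix,
so "solr-u" can match through whichever analysis chain fits.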

Regards,
 Alex
On 16/07/2014 2:46 am, "Hayden Muhl"  wrote:

> Both fields? There is only one field here: username.
>
>
> On Mon, Jul 14, 2014 at 6:17 PM, Alexandre Rafalovitch  >
> wrote:
>
> > Search against both fields (one split, one not split)? Keep original
> > and tokenized form? I am doing something similar with class name
> > autocompletes here:
> >
> >
> https://github.com/arafalov/Solr-Javadoc/blob/master/JavadocIndex/JavadocCollection/conf/schema.xml#L24
> >
> > Regards,
> >Alex.
> > Personal: http://www.outerthoughts.com/ and @arafalov
> > Solr resources: http://www.solr-start.com/ and @solrstart
> > Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
> >
> >
> > On Tue, Jul 15, 2014 at 8:04 AM, Hayden Muhl 
> wrote:
> > > I'm working on using Solr for autocompleting usernames. I'm running
> into
> > a
> > > problem with the wildcard queries (e.g. username:al*).
> > >
> > > We are tokenizing usernames so that a username like "solr-user" will be
> > > tokenized into "solr" and "user", and will match both "sol" and "use"
> > > prefixes. The problem is when we get "solr-u" as a prefix, I'm having
> to
> > > split that up on the client side before I construct a query
> > "username:solr*
> > > username:u*". I'm basically using a regex as a poor man's tokenizer.
> > >
> > > Is there a better way to approach this? Is there a way to tell Solr to
> > > tokenize a string and use the parts as prefixes?
> > >
> > > - Hayden
> >
>


questions on Solr WordBreakSolrSpellChecker and WordDelimiterFilterFactory

2014-07-15 Thread jiag
Hello everyone :)

I have a product called "xbox" indexed, and when the user searches for
either "x-box" or "x box" I want the "xbox" product to be
returned. I'm new to Solr, and from reading online, I thought I needed
to use WordDelimiterFilterFactory for the "x-box" case and
WordBreakSolrSpellChecker for the "x box" case. Is this correct?

(1) In my schema file, this is what I changed:


But I don't see the xbox product returned when the search term is
"x-box", so I must have missed something.

(2) I tried to use  WordBreakSolrSpellChecker together with
DirectSolrSpellChecker as shown below, but the WordBreakSolrSpellChecker
never got used:


wc_textSpell


  default
  spellCheck
  solr.DirectSolrSpellChecker
  internal
0.3
2
1
5
3
0.01
0.004

 
wordbreak
solr.WordBreakSolrSpellChecker
spellCheck
true
true
10
  
  

  

SpellCheck
true
default
wordbreak
 true
false
10
true
false


  wc_spellcheck

  

I tried to build the dictionary this way:
http://localhost/solr/coreName/select?spellcheck=true&spellcheck.build=true,
but the response returned is this:


0
0

true
true


build



What's the correct way to build the dictionary?
Even though my requestHandler's name is "/spellcheck", I wasn't able to
use
http://localhost/solr/coreName/spellcheck?spellcheck=true&spellcheck.build=true
. Is there something wrong with my definition above?

(3) I also tried to use WordBreakSolrSpellChecker without the
DirectSolrSpellChecker as shown below:


  wc_textSpell

default
solr.WordBreakSolrSpellChecker
spellCheck
true
true
10
  
   

   

SpellCheck
true
default

 true
false
10
true
false


  wc_spellcheck

  

And I'm still unable to see WordBreakSolrSpellChecker being called anywhere.

Would someone kindly help me?

Many thanks,
Jia


problem with replication/solrcloud - getting 'missing required field' during update intermittently (SOLR-6251)

2014-07-15 Thread Nathan Neulinger
The issue was closed in Jira with a request that it be discussed here first. I'm looking for any diagnostic assistance on this issue with
4.8.0, since it is intermittent and occurs without warning.


Setup is two nodes, with external zk ensemble. Nodes are accessed round-robin 
on EC2 behind an ELB.

Schema has:


...
   omitNorms="true" />

...


Most of the updates are working without issue, but randomly we'll get the above failure, even though searches before and 
after the update clearly indicate that the document had the timestamp field in it. The error occurs when the second node 
does its distrib operation against the first node.


Diagnostic details are all in the Jira issue. I can provide more as needed, but I would appreciate any suggestions on what
to try to help diagnose this, other than just throwing thousands of requests at it round-robin between the
two instances to see if it's possible to reproduce the issue.


-- Nathan


Nathan Neulinger   nn...@neulinger.org
Neulinger Consulting   (573) 612-1412


Re: External File Field eating memory

2014-07-15 Thread Kamal Kishore Aggarwal
Hi Apoorva,

This was my master server replication configuration:

core/conf/solrconfig.xml


> <requestHandler name="/replication" class="solr.ReplicationHandler">
>   <lst name="master">
>     <str name="replicateAfter">commit</str>
>     <str name="replicateAfter">startup</str>
>     <str name="confFiles">../data/external_eff_views</str>
>   </lst>
> </requestHandler>


Only configuration files can be replicated, so with the above config the
external file was getting replicated into
core/conf/data/external_eff_views.
But for Solr to read the external file, it looks for it in the
core/data/external_eff_views
location. So the file was not ending up where Solr reads it, and therefore
I did not opt for replicating the eff file.

And the second thing is that whenever there is a change in configuration
files, the core gets reloaded by itself to reflect the changes. I am not
sure if you can disable this reloading.

Finally, I decided to create the files on the slaves in a different way.

Thanks
Kamal


On Tue, Jul 15, 2014 at 11:00 AM, Apoorva Gaurav 
wrote:

> Hey Kamal,
> What all config changes have you done to establish replication of external
> files and how have you disabled role reloading?
>
>
> On Wed, Jul 9, 2014 at 11:30 AM, Kamal Kishore Aggarwal <
> kkroyal@gmail.com> wrote:
>
> > Hi All,
> >
> > It was found that external file, which was getting replicated after every
> > 10 minutes was reloading the core as well. This was increasing the query
> > time.
> >
> > Thanks
> > Kamal Kishore
> >
> >
> >
> > On Thu, Jul 3, 2014 at 12:48 PM, Kamal Kishore Aggarwal <
> > kkroyal@gmail.com> wrote:
> >
> > > With the above replication configuration, the eff file is getting
> > > replicated at core/conf/data/external_eff_views (new dir data is being
> > > created in conf dir) location, but it is not getting replicated at
> > core/data/external_eff_views
> > > on slave.
> > >
> > > Please help.
> > >
> > >
> > > On Thu, Jul 3, 2014 at 12:21 PM, Kamal Kishore Aggarwal <
> > > kkroyal@gmail.com> wrote:
> > >
> > >> Thanks for your guidance Alexandre Rafalovitch.
> > >>
> > >> I am looking into this seriously.
> > >>
> > >> Another question is that I facing error in replication of eff file
> > >>
> > >> This is master replication configuration:
> > >>
> > >> core/conf/solrconfig.xml
> > >>
> > >> 
> > >>> 
> > >>> commit
> > >>> startup
> > >>> ../data/external_eff_views
> > >>> 
> > >>> 
> > >>
> > >>
> > >> The eff file is present at core/data/external_eff_views location.
> > >>
> > >>
> > >> On Thu, Jul 3, 2014 at 11:50 AM, Shalin Shekhar Mangar <
> > >> shalinman...@gmail.com> wrote:
> > >>
> > >>> This might be related:
> > >>>
> > >>> https://issues.apache.org/jira/browse/SOLR-3514
> > >>>
> > >>>
> > >>> On Sat, Jun 28, 2014 at 5:34 PM, Kamal Kishore Aggarwal <
> > >>> kkroyal@gmail.com> wrote:
> > >>>
> > >>> > Hi Team,
> > >>> >
> > >>> > I have recently implemented EFF in solr. There are about 1.5
> > >>> lacs(unsorted)
> > >>> > values in the external file. After this implementation, the server
> > has
> > >>> > become slow. The solr query time has also increased.
> > >>> >
> > >>> > Can anybody confirm me if these issues are because of this
> > >>> implementation.
> > >>> > Is that memory does EFF eats up?
> > >>> >
> > >>> > Regards
> > >>> > Kamal Kishore
> > >>> >
> > >>>
> > >>>
> > >>>
> > >>> --
> > >>> Regards,
> > >>> Shalin Shekhar Mangar.
> > >>>
> > >>
> > >>
> > >
> >
>
>
>
> --
> Thanks & Regards,
> Apoorva
>


Re: External File Field eating memory

2014-07-15 Thread Apoorva Gaurav
Thanks Kamal.


On Wed, Jul 16, 2014 at 11:43 AM, Kamal Kishore Aggarwal <
kkroyal@gmail.com> wrote:

> Hi Apporva,
>
> This was my master server replication configuration:
>
> core/conf/solrconfig.xml
>
> 
> > 
> > commit
> > startup
> > ../data/external_eff_views
> > 
> > 
>
>
> It is only configuration files that can be replicated. So, when I wrote the
> above config. The external files was getting replicated in
> core/conf/data/external_eff_views.
> But for solr to read the external file, it looks for it into
> core/data/external_eff_views
> location. So firstly the file was not getting replicated properly.
> Therefore, I did not opted the option of replicating the eff file.
>
> And the second thing is that whenever there is a change in configuration
> files, the core gets reloaded by itself to reflect the changes. I am not
> sure if you can disable this reloading.
>
> Finally, I thought of creating files on slaves in a different way.
>
> Thanks
> Kamal
>
>
> On Tue, Jul 15, 2014 at 11:00 AM, Apoorva Gaurav <
> apoorva.gau...@myntra.com>
> wrote:
>
> > Hey Kamal,
> > What all config changes have you done to establish replication of
> external
> > files and how have you disabled role reloading?
> >
> >
> > On Wed, Jul 9, 2014 at 11:30 AM, Kamal Kishore Aggarwal <
> > kkroyal@gmail.com> wrote:
> >
> > > Hi All,
> > >
> > > It was found that external file, which was getting replicated after
> every
> > > 10 minutes was reloading the core as well. This was increasing the
> query
> > > time.
> > >
> > > Thanks
> > > Kamal Kishore
> > >
> > >
> > >
> > > On Thu, Jul 3, 2014 at 12:48 PM, Kamal Kishore Aggarwal <
> > > kkroyal@gmail.com> wrote:
> > >
> > > > With the above replication configuration, the eff file is getting
> > > > replicated at core/conf/data/external_eff_views (new dir data is
> being
> > > > created in conf dir) location, but it is not getting replicated at
> > > core/data/external_eff_views
> > > > on slave.
> > > >
> > > > Please help.
> > > >
> > > >
> > > > On Thu, Jul 3, 2014 at 12:21 PM, Kamal Kishore Aggarwal <
> > > > kkroyal@gmail.com> wrote:
> > > >
> > > >> Thanks for your guidance Alexandre Rafalovitch.
> > > >>
> > > >> I am looking into this seriously.
> > > >>
> > > >> Another question is that I facing error in replication of eff file
> > > >>
> > > >> This is master replication configuration:
> > > >>
> > > >> core/conf/solrconfig.xml
> > > >>
> > > >>  >
> > > >>> 
> > > >>> commit
> > > >>> startup
> > > >>> ../data/external_eff_views
> > > >>> 
> > > >>> 
> > > >>
> > > >>
> > > >> The eff file is present at core/data/external_eff_views location.
> > > >>
> > > >>
> > > >> On Thu, Jul 3, 2014 at 11:50 AM, Shalin Shekhar Mangar <
> > > >> shalinman...@gmail.com> wrote:
> > > >>
> > > >>> This might be related:
> > > >>>
> > > >>> https://issues.apache.org/jira/browse/SOLR-3514
> > > >>>
> > > >>>
> > > >>> On Sat, Jun 28, 2014 at 5:34 PM, Kamal Kishore Aggarwal <
> > > >>> kkroyal@gmail.com> wrote:
> > > >>>
> > > >>> > Hi Team,
> > > >>> >
> > > >>> > I have recently implemented EFF in solr. There are about 1.5
> > > >>> lacs(unsorted)
> > > >>> > values in the external file. After this implementation, the
> server
> > > has
> > > >>> > become slow. The solr query time has also increased.
> > > >>> >
> > > >>> > Can anybody confirm me if these issues are because of this
> > > >>> implementation.
> > > >>> > Is that memory does EFF eats up?
> > > >>> >
> > > >>> > Regards
> > > >>> > Kamal Kishore
> > > >>> >
> > > >>>
> > > >>>
> > > >>>
> > > >>> --
> > > >>> Regards,
> > > >>> Shalin Shekhar Mangar.
> > > >>>
> > > >>
> > > >>
> > > >
> > >
> >
> >
> >
> > --
> > Thanks & Regards,
> > Apoorva
> >
>



-- 
Thanks & Regards,
Apoorva