Re: simple tokenizer question

2013-12-08 Thread Vulcanoid Developer
Thanks for your email.

Great, I will look at the WordDelimiterFilterFactory. Just to be clear, I DON'T
want any other tokenizing done on digits, special chars, punctuation, etc.,
other than word delimiting on whitespace.

All I want for my first version is NO removal of punctuation/special
characters at indexing time or during search time, i.e., input as-is and
search as-is (like a simple SQL DB?). I was assuming this would be a
trivial case with SOLR and am not sure what I am missing here.

thanks
Vulcanoid



On Sun, Dec 8, 2013 at 4:33 AM, Upayavira  wrote:

> Have you tried a WhitespaceTokenizerFactory followed by the
> WordDelimiterFilterFactory? The latter is perhaps more configurable in
> what it does. Alternatively, you could use a RegexFilterFactory to
> remove extraneous punctuation that wasn't removed by the Whitespace
> Tokenizer.
>
> Upayavira
>
> On Sat, Dec 7, 2013, at 06:15 PM, Vulcanoid Developer wrote:
> > Hi,
> >
> > I am new to solr and I guess this is a basic tokenizer question so please
> > bear with me.
> >
> > I am trying to use SOLR to index a few (Indian) legal judgments in text
> > form and search against them. One of the key points with these documents is
> > that the sections/provisions of law usually have punctuation/special
> > characters in them. For example, search queries will TYPICALLY be section
> > 12AA, section 80-IA, section 9(1)(vii), and the text of the judgments
> > themselves will contain this sort of text, with section references all
> > over the place.
> >
> > Now, using a default schema setup with StandardTokenizer, which seems to
> > delimit on whitespace AND punctuation, I get really bad results: it looks
> > like 12AA is split, and results having 12 and AA in them turn up.
> > It becomes worse with 9(1)(vii), with results containing 9 and 1 etc.
> > being turned up.
> >
> > What is the best solution here? I really just want to index the document
> > as-is and also to do whitespace tokenizing on the search and nothing
> > more.
> >
> > So in other words:
> > a) I would like the text document to be indexed as-is with say 12AA and
> > 9(1)(vii) in the document stored as it is mentioned.
> > b) I would like to be able to search for 12AA and for 9(1)(vii) and get
> > proper full matches on them without any splitting up/munging etc.
> >
> > Any suggestions are appreciated.  Thank you for your time.
> >
> > Thanks
> > Vulcanoid
>


Re: Difference between textfield and strfield

2013-12-08 Thread manju16832003
I don't understand. Should I use the field type *Ahmet* recommended? Who is Ahmet?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Difference-between-textfield-and-strfield-tp3986916p4105570.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Bad fieldNorm when using morphologic synonyms

2013-12-08 Thread Manuel Le Normand
Robert, your last reply is not accurate.
It's true that the field norms and termVectors are independent. But this
issue of higher norms for this case is expected even with well-assigned
positions. The length norm is computed from FieldInvertState.length, which is
the count of incrementToken() calls and not the number of positions! This is
the case for WordDelimiterFilter or ReversedWildcardFilter, which do change
the norm when expanding a term.


RE: Faceting within groups

2013-12-08 Thread Cool Techi
Any help here?

> From: cooltec...@outlook.com
> To: solr-user@lucene.apache.org
> Subject: Faceting within groups
> Date: Sat, 7 Dec 2013 14:00:20 +0530
> 
> Hi,
> I am not sure if faceting with groups is supported; the documents do seem to
> suggest it works, but I can't seem to get the intended results. The query was
> along the lines of:
> q=("Amazon Cloud" OR "IBM Cloud")&group=true&group.field=sourceId&facet=true&facet.field=sentiment
> Also, if it works, does SolrCloud support it?
> Regards, Ayush

Re: Bad fieldNorm when using morphologic synonyms

2013-12-08 Thread Robert Muir
It's accurate; you are wrong.

Please look at setDiscountOverlaps in your Similarity. This is really
easy to understand.
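For readers following along: discountOverlaps controls whether tokens with a position increment of 0 (overlaps, e.g. injected synonyms or split-word expansions) are counted in FieldInvertState.length when the norm is computed. A sketch of how this is typically set in a Solr 4.x schema.xml (the factory name matches Solr 4.x; treat the exact placement as an assumption):

```xml
<!-- schema.xml: subtract overlapping tokens (positionIncrement=0)
     from the field length used for the norm. -->
<similarity class="solr.DefaultSimilarityFactory">
  <bool name="discountOverlaps">true</bool>
</similarity>
```

With discountOverlaps set to true, expansion tokens at the same position no longer inflate the field length, which addresses the higher norms described above.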

On Sun, Dec 8, 2013 at 7:23 AM, Manuel Le Normand
 wrote:
> Robert, your last reply is not accurate.
> It's true that the field norms and termVectors are independent. But this
> issue of higher norms for this case is expected even with well-assigned
> positions. The length norm is computed from FieldInvertState.length, which is
> the count of incrementToken() calls and not the number of positions! This is
> the case for WordDelimiterFilter or ReversedWildcardFilter, which do change
> the norm when expanding a term.


Re: simple tokenizer question

2013-12-08 Thread Upayavira
If you want to just split on whitespace, then the WhitespaceTokenizer
will do the job.

However, this will mean that these two tokens aren't the same, and won't
match each other:

cat
cat.

A simple regex filter could handle those cases, removing a comma or dot
at the end of a word. There are other similar situations
(quotes, colons, etc.) that you may want to handle eventually.
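To make that concrete, a whitespace-only field type with a small pattern filter for trailing punctuation might look like the sketch below (the field type name and the exact pattern are illustrative, not from the thread):

```xml
<!-- Sketch: split on whitespace only, then strip trailing . or , per token. -->
<fieldType name="text_ws_clean" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- remove one or more trailing dots/commas from each token -->
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="[.,]+$" replacement="" replace="all"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Applied at both index and query time, this keeps 12AA, 80-IA, and 9(1)(vii) as single tokens while still letting "cat." match "cat".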

Upayavira

On Sun, Dec 8, 2013, at 11:51 AM, Vulcanoid Developer wrote:
> [...]


Re: simple tokenizer question

2013-12-08 Thread Josh Lincoln
Have you tried adding autoGeneratePhraseQueries="true" to the fieldType,
without changing the index analysis behavior?

This works at query time only, and will convert 12-34 to "12 34", as if the
user had entered the query as a phrase. This gives the expected behavior as
long as the tokenization is the same at index time and query time.
This'll work for the 80-IA structure, and I think it'll also work for
the 9(1)(vii)
example (converting it to "9 1 vii"), but I haven't tested it. Also, I
would think the 12AA example should already be working as you expect,
unless maybe you're already using the worddelimiterfilterfactory. When I
test the standardTokenizer on 12AA it preserves the string, resulting in
just one token of 12aa.

autoGeneratePhraseQueries is at least worth a quick try - it doesn't
require reindexing.

Two things to note
1) don't use autoGeneratePhraseQueries if you have CJK languages; that
probably applies to any language that's not whitespace-delimited. You
mentioned Indian judgments; I presume Hindi, which I don't think will be an
issue.
2) In very rare cases you may have a few odd results if the
non-alphanumeric characters differ but generate the same phrase query, e.g.
9(1)(vii) would produce the same phrase as 9-1(vii), but this doesn't seem
worth considering until you know it's a problem.
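For reference, the attribute goes on the fieldType declaration itself; a minimal sketch (the type name and analyzer chain are illustrative):

```xml
<!-- Sketch: query-time phrase generation for multi-token input like 80-IA.
     No reindexing is required, since only query parsing changes. -->
<fieldType name="text_general" class="solr.TextField"
           positionIncrementGap="100" autoGeneratePhraseQueries="true">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With this set, a query like 80-IA is parsed as the phrase "80 ia" rather than the two independent terms 80 and ia.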


On Sun, Dec 8, 2013 at 10:29 AM, Upayavira  wrote:

> [...]


Re: Configurable collectors for custom ranking

2013-12-08 Thread Joel Bernstein
Hi Peter,

I've been meaning to revisit configurable ranking collectors, but I haven't
yet had a chance. It's on the shortlist of things I'd like to tackle
though.



On Fri, Dec 6, 2013 at 4:17 PM, Peter Keegan  wrote:

> I looked at SOLR-4465 and SOLR-5045, where it appears that there is a goal
> to be able to do custom sorting and ranking in a PostFilter. So far, it
> looks like only custom aggregation can be implemented in PostFilter (5045).
> Custom sorting/ranking can be done in a pluggable collector (4465), but
> this patch is no longer in dev.
>
> Is there any other dev. being done on adding custom sorting (after
> collection) via a plugin?
>
> Thanks,
> Peter
>



-- 
Joel Bernstein
Search Engineer at Heliosearch


Fwd: solr.xml

2013-12-08 Thread William Bell
Any thoughts? Why are we getting duplicate items in solr.xml?

-- Forwarded message --
From: William Bell 
Date: Sat, Dec 7, 2013 at 1:48 PM
Subject: solr.xml
To: "solr-user@lucene.apache.org" 


We are having issues with SWAP CoreAdmin in 4.5.1 and 4.6.

Using legacy solr.xml we issue a SWAP, and we want it persistent. It had
been running flawlessly since 4.5. Now it creates duplicate lines in solr.xml.

Even the example multi core schema in 4.5.1 doesn't work with
persistent="true" - it creates duplicate lines in solr.xml.
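A legacy-style persistent solr.xml of the kind being discussed looks roughly like this (core names are illustrative, not from the mail):

```xml
<!-- Sketch: legacy (pre-core-discovery) solr.xml with persistence enabled,
     so CoreAdmin actions such as SWAP are written back to this file. -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0"/>
    <core name="core1" instanceDir="core1"/>
  </cores>
</solr>
```

The duplicate-line symptom described above would show up as repeated &lt;core&gt; entries after the file is rewritten.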

-- 
Bill Bell
billnb...@gmail.com
cell 720-256-8076



-- 
Bill Bell
billnb...@gmail.com
cell 720-256-8076


Re: Faceting Query in Solr

2013-12-08 Thread kumar
It's working fine.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Faceting-Query-in-Solr-tp4104881p4105612.html
Sent from the Solr - User mailing list archive at Nabble.com.


SOLR 4 - Query Issue in Common Grams with Surround Query Parser

2013-12-08 Thread Salman Akram
All,

I posted this sub-issue along with another issue a few days back, but maybe
it was not obvious, so I am posting it on a separate thread.

We recently migrated to SOLR 4.6. We use Common Grams, but queries with
words in the CG list have slowed down. On debugging, we found that for CG
words the parser is adding the individual tokens of those words to the query
too, which ends up slowing it down. Below is an example:

Query = "only be"

Here is what debug shows. The difference between the two versions is that
SOLR 4.6 is making it a MultiPhraseQuery and adding the individual tokens
too. Can someone help?

SOLR 4.6 (takes 20 secs)
{!surround}
{!surround}
MultiPhraseQuery(Contents:"(only only_be) be")
Contents:"(only only_be) be"

SOLR 1.4.1 (takes 1 sec)
{!surround}
{!surround}
Contents:only_be
Contents:only_be
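For comparison, the usual Common Grams setup pairs CommonGramsFilterFactory at index time with CommonGramsQueryFilterFactory at query time; the query-side filter is the one that emits only the combined gram (as in the 1.4.1 output above). A sketch, with the field type name and words file as placeholders:

```xml
<!-- Sketch: index both words and grams, but query on grams alone,
     so "only be" becomes the single term only_be at query time. -->
<fieldType name="text_cg" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.CommonGramsFilterFactory"
            words="commongrams.txt" ignoreCase="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.CommonGramsQueryFilterFactory"
            words="commongrams.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>
```

If the query analyzer uses the plain CommonGramsFilterFactory instead, the individual tokens survive alongside the gram, which matches the slow MultiPhraseQuery shown for 4.6.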


Regards,

Salman Akram


Re: Prioritize search returns by URL path?

2013-12-08 Thread manju16832003
Could it be achieved using multiple request handlers?

Example:

http://localhost:8983/solr/my/wiki
http://localhost:8983/solr/my/blog
http://localhost:8983/solr/my/forum

We could configure each request handler to specify the query.
It would be great if Solr supported querying those three request handlers
together and combining the result sets.
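One way to sketch that idea in solrconfig.xml is a handler per source, each pinning its own filter query in the defaults (the handler names and the source field are assumptions, not from the thread):

```xml
<!-- Sketch: one SearchHandler per content source, each with a fixed fq. -->
<requestHandler name="/wiki" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="fq">source:wiki</str>
  </lst>
</requestHandler>
<requestHandler name="/blog" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="fq">source:blog</str>
  </lst>
</requestHandler>
<requestHandler name="/forum" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="fq">source:forum</str>
  </lst>
</requestHandler>
```

Combining their results in one response is not something the handlers do by themselves; a single handler with boosts, or client-side merging, would be needed for that.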





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Prioritize-search-returns-by-URL-path-tp4105023p4105622.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Null pointer exception in spell checker at addchecker method

2013-12-08 Thread Areek Zillur
From the solrConfig provided, it seems like you have only two named
spellcheckers defined (direct & wordbreak), but in your '/spell'
requestHandler you are specifying three spellcheckers (direct, default &
wordbreak). As you do not have an unnamed spellchecker, there is no
spellchecker defined with the name 'default'.
Hence it's erroring out when it tries to get the config for the non-existent
'default' spellchecker. I think it should work if you remove "default" from
your '/spell' requestHandler.
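Concretely, the suggested fix amounts to listing only the spellcheckers that actually exist; a sketch of the '/spell' requestHandler (parameter names follow the standard SpellCheckComponent configuration; the other defaults shown are illustrative):

```xml
<!-- Sketch: reference only the defined dictionaries, "direct" and
     "wordbreak"; the unnamed "default" reference is what triggers the NPE. -->
<requestHandler name="/spell" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">direct</str>
    <str name="spellcheck.dictionary">wordbreak</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```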

Hope that helps,

Areek


On Fri, Dec 6, 2013 at 11:33 PM, sweety  wrote:

> I'm trying to use the spell check component.
> My *schema* is (I have included only the fields necessary for spell check,
> not the entire schema):
> 
>
>  multiValued="false"/>
>  multiValued="false"/>
>  multiValued="false"/>
>  multiValued="true"/>
> 
>  multiValued="false"/>
> 
>  multiValued="true"/>
>
>  multiValued="true" />
> 
> 
> 
> 
> 
> 
> 
> 
> 
>  splitOnCaseChange="1"/>
> 
> 
> 
> 
> 
> 
> 
> 
>
> My *solrconfig* is:
>
> 
> text
> 
> direct
> contents
> solr.DirectSolrSpellChecker
> internal
> 0.8
> 1
> 1
> 5
> 3
> 0.01
> 
> 
>
> 
>   
>wordbreak
>solr.WordBreakSolrSpellChecker
>contents
>true
>true
>10
>  
> 
>
> 
> 
> true
> direct
> default
> wordbreak
> on
> true
> 5
> true
> true
> 
> 
> spellcheck
> 
> 
>
> I get this *error*:
> java.lang.NullPointerException at
>
> org.apache.solr.spelling.*ConjunctionSolrSpellChecker.addChecker*(ConjunctionSolrSpellChecker.java:58)
> at
>
> org.apache.solr.handler.component.SpellCheckComponent.getSpellChecker(SpellCheckComponent.java:475)
> at
>
> org.apache.solr.handler.component.SpellCheckComponent.prepare(SpellCheckComponent.java:106)
> at
>
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:187)
> at
>
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at
>
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1797) at
>
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:637)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:343)
> at
>
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
> at
>
> I know that the error might be in the addChecker method. I read this
> method, but its coding is such that, for all the null values, default
> values are added.
> (e.g.: if (queryAnalyzer == null)
>  queryAnalyzer = checker.getQueryAnalyzer(); )
> So I feel that a null checker value is passed when
> /checkers.add(checker);/ is executed.
>
> If I am right, tell me how to resolve this; else, what has gone wrong?
> Thanks in advance.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Null-pointer-exception-in-spell-checker-at-addchecker-method-tp4105489.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Resolve an issuse with SOLR

2013-12-08 Thread Munusamy, Kannan

Hi,

I have used the "+ Add Core" option in the admin UI, but I was not able to
add a core. It then showed “HTTP Status 500 - {msg=SolrCore 'new_core' is
not available due to init failure: Path must not end with /”. Once I
restarted the Solr service, I am now getting this error in the UI:

“Unable to load environment info from 
/solr/collection1_shard1_replica1/admin/system?wt=json.
This interface requires that you activate the admin request handlers in all 
SolrCores by adding the following configuration to your solrconfig.xml:”

PFA error image.

Please provide suggestions and help us to resolve the issue.

Thanks & Regards,


   Kannan Munusamy | Capgemini India | Bangalore
   Off: 080 66567000 Extn: 8068605 | Cell: +91 9952312352
   kannan.munus...@capgemini.com | www.in.capgemini.com
   People matter, results count.

   Print only if absolutely necessary | Switch off as you go | Recycle always

This message contains information that may be privileged or confidential and is
the property of the Capgemini Group. It is intended only for the person to whom 
it is addressed. If you are not the intended recipient, you are not authorized 
to read, print, retain, copy, disseminate, distribute, or use this message or 
any part thereof. If you receive this message in error, please notify the 
sender immediately and delete all copies of this message.

Re: Null pointer exception in spell checker at addchecker method

2013-12-08 Thread ??????
How can I unsubscribe?




-- Original --
From:  "Areek Zillur";;
Date:  Mon, Dec 9, 2013 02:26 PM
To:  "solr-user"; 

Subject:  Re: Null pointer exception in spell checker at addchecker method



[...]

Solr with tomcat installation fails to respond on low load

2013-12-08 Thread shinkanze
Hi All,

I am having a problem with one of my *slave Solr* servers.
My Solr server fails to respond to *queries every alternate night*, although
it is able to answer 3-4 times as many queries during the day. Queries take
very long to answer (50 to 1000 times longer) and I start receiving
ClientAbortExceptions, and I have to restart it.

We have a replication cycle of every 15 minutes during the day.

Index size is 300 MB.
Memory on the server: 12 GB
Memory provided to Tomcat: 3-6 GB
No. of CPUs: 4

One CPU core has a CPU utilization of 100%.

There is no replication during the night.

The other slave with the same configuration is working fine.

I ruled out GC because I had to restart the server twice at an interval of
5 hours, which I think is not enough for the garbage collector to kick in
under 1/3 load.

I am unable to find any lead on this. Please help me reach a solution.

regards

Rajat




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-with-tomcat-installation-fails-to-respond-on-low-load-tp4105634.html
Sent from the Solr - User mailing list archive at Nabble.com.