lock
being single on both machines, that solved the issue.
I would however love to know what caused that error (it's never too late to
learn, right???)
Thanks,
Ravi Kiran
On Mon, Feb 7, 2011 at 2:51 PM, Chris Hostetter wrote:
>
> : While reloading a core I got this following error,
Thanks for updating your solution
On Tue, Feb 8, 2011 at 8:20 AM, shan2812 wrote:
>
> Hi,
>
> At last the migration to Solr-1.4.1 does solve this issue :-)..
>
> Cheers
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Http-Connection-is-hanging-while-deleteByQuery-tp2367
-0500|SEVERE|sun-appserver2.1|org.apache.solr.update.SolrIndexWriter|_ThreadID=82;_ThreadName=Finalizer;_RequestID=121fac59-7b08-46b9-acaa-5c5462418dc7;|SolrIndexWriter
was not closed prior to finalize(), indicates a bug -- POSSIBLE RESOURCE
LEAK!!!|#]
Ravi Kiran Bhaskar
Ravi Kiran Bhaskar
On Mon, Jan 31, 2011 at 6:19 AM, shan2812 wrote:
>
> This is the log trace..
>
> 2011-01-31 10:07:18,837 ERROR (main)[SearchBusinessControllerImpl] Solr
> connecting to url: http://10.145.1
Solr 1.4.1
Ravi Kiran Bhaskar
On Fri, Jan 28, 2011 at 4:23 PM, Ravi Kiran wrote:
> Hello,
> We have a core with about 900K docs. Recently I have noticed that
> the deleteById query seems to always give me a SocketTimeoutException(stack
> trace is shown below). I cannot figure
SocketTimeoutException. Do you also get a timeout exception? And how many
docs do you have?
Thanks
Ravi Kiran
On Fri, Jan 28, 2011 at 12:52 PM, shan2812 wrote:
>
> Though it may not be needed, just to add..
>
> this is how I delete by query
>
> UpdateResponse updateResponse = solr
lrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:416)
... 13 more
Thanks
Ravi Kiran
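For anyone hitting this in the archives: the client-side read timeout is usually what surfaces as a SocketTimeoutException on deletes. In SolrJ itself the knobs are CommonsHttpSolrServer.setSoTimeout(...) and setConnectionTimeout(...). Below is a minimal stdlib-only sketch of the raw call SolrJ makes under the hood, with explicit timeouts; the class name, helper names, and URL are illustrative assumptions, not the poster's actual code.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: posts a delete-by-query message to Solr's
// /update handler with explicit timeouts, so a slow delete/commit shows
// up as java.net.SocketTimeoutException instead of a silent hang.
class DeleteByQuerySketch {

    // Solr's XML update message for delete-by-query.
    static String payload(String query) {
        return "<delete><query>" + query + "</query></delete>";
    }

    // baseUrl is a placeholder, e.g. "http://localhost:8983/solr/core0".
    static void post(String baseUrl, String query) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(baseUrl + "/update").openConnection();
        conn.setConnectTimeout(5_000);   // connection establishment
        conn.setReadTimeout(60_000);     // read timeout -> SocketTimeoutException
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload(query).getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }

    public static void main(String[] args) {
        System.out.println(payload("id:123"));
    }
}
```

Raising the read timeout only hides a slow delete, of course; it just turns a hang into a visible wait.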
context root. My solr instance is deployed on Glassfish. Alternately, if
there is a configurable way to setup multiple context roots for the same
solr instance that will suffice at this point of time.
Ravi Kiran
; 1 not
recommended.
        } catch (MalformedURLException mex) {
            throw new SolrCustomException("Cannot resolve Solr Server at '" + url + "'\n", mex);
        }
        return server;
    }
}
Thanks,
Ravi Kiran Bhaskar
On Wed, Dec
. Please let me
know if any more info is required
Thanks,
*Ravi Kiran Bhaskar*
Yes, I also did see the exclude="true" in an example elevate.xml...was
wondering what it does precisely, and if "text" MUST have a value? I couldn't
find any documentation explaining it.
Ravi Kiran Bhaskar
Principal Software Engineer
Washington Post
1150 15th S
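For reference, the sample elevate.xml that ships with Solr looks roughly like this (the query text and doc ids below are placeholders). The text attribute is the exact incoming query the rule applies to, so yes, it must have a value; exclude="true" removes that document from the results for that query instead of elevating it.

```xml
<elevate>
  <!-- "text" is the query string this rule fires on; it is required. -->
  <query text="washington politics">
    <doc id="doc-1"/>                 <!-- forced to the top, in this order -->
    <doc id="doc-2"/>
    <doc id="doc-3" exclude="true"/>  <!-- dropped from results entirely -->
  </query>
</elevate>
```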
ust adding 100
or 200 IDs to exclude could cause trouble. This is exactly why I am
trying to find a configuration option as opposed to writing filter queries.
Thank you all for actively helping me out.
Ravi Kiran Bhaskar
Principal Software Engineer
Washington Post
1150 15th Street NW, Washington
that I could negatively boost certain docs but could
not find any.
Can anybody kindly point me in the right direction.
Thanks
Ravi Kiran Bhaskar
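One common workaround here, since a purely negative boost is not supported: boost every document *except* the ones you want demoted, via the boost-query (bq) parameter of dismax. A sketch, where the field name source and the value wire are placeholder assumptions:

```
# demote docs whose source is "wire" by boosting everything else
defType=dismax
bq=(*:* -source:wire)^10
```

The higher the boost factor, the further the matching docs sink relative to the rest.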
options, I
shall pursue them...asking a question on a forum is fun when you have
knowledgeable people, isn't it??? That's true reuse of resources in software
terms :-), reuse of knowledge in developer space!!!
Ravi Kiran Bhaskar
Principal Software Engineer
The Washington Post
On Thu, Sep 16, 20
q_.28Boost_Query.29
> ________
> From: Ravi Kiran [ravi.bhas...@gmail.com]
> Sent: Wednesday, September 15, 2010 10:02 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Boosting specific field value
>
> Erick,
> I'm afraid you misinterpreted my issueif
ource in the fq 2 times as I need docs that
have source values too just that they will have a lower boost
Thanks,
Ravi Kiran Bhaskar
On Wed, Sep 15, 2010 at 1:34 PM, Erick Erickson wrote:
> This seems like a simple query-time boost, although I may not be
> understanding
> your problem well.
Hello,
I am currently querying solr for a "*primarysection*" which will
return documents like - *q=primarysection:(Politics* OR
Nation*)&fq=contenttype:("Blog" OR "Photo Gallery") pubdatetime:[NOW-3MONTHS
TO NOW]"*. Each document has several fields of which I am most interested in
single val
rdIndexReaderFactory.newReader(StandardIndexReaderFactory.java:38)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1056) ... 21 more
Ravi Kiran Bhaskar
On Tue, Aug 3, 2010 at 11:15 AM, Ravi Kiran wrote:
> Hello Mr.Hostetter,
> Thank you very much fo
Hello Mr.Hostetter,
Thank you very much for the clarification. I do
remember that when I first deployed the solr code from trunk on a test
server I couldn't open the index (created via 1.4) even via the solr admin
page, it kept giving me a corrupted-index EOF kind of except
.4). Is re-indexing my only option or is there a tool of some sort to
convert the 1.4 index to 3.1 format ?
Thanks,
Ravi Kiran
our continued interest in answering my questions.
Ravi Kiran Bhaskar
On Thu, Jul 1, 2010 at 7:08 PM, Jan Høydahl / Cominvent <
jan@cominvent.com> wrote:
> Hi,
>
> Another more complex approach is to design a routine that once in a while
> selectively decides what documen
d word word word source start,end 0,12 0,12 0,12 0,12 0,12
On Thu, Jul 1, 2010 at 7:04 AM, Ahmet Arslan wrote:
>
>
> --- On Thu, 7/1/10, Ravi Kiran wrote:
>
> > From: Ravi Kiran
> > Subject: Dilemma - Very Frequent Synonym updates for Huge Index
> > To: solr-user@
e fly with extracted entities from
OpenNLP and then index it straight into SOLR. However, we do some sanity
checks for locations prior to indexing using wordnet so that false positives
are avoided in location names.
Thanks,
Ravi Kiran Bhaskar
On Thu, Jul 1, 2010 at 5:40 AM, Jan Høydahl / Comin
Times does in the url given
below...that's exactly what I am trying to do
http://topics.nytimes.com/topics/reference/timestopics/all/b/index.html
Thanks,
Ravi Kiran Bhaskar
On Thu, Jul 1, 2010 at 7:04 AM, Ahmet Arslan wrote:
>
>
> --- On Thu, 7/1/10, Ravi Kiran wrote:
>
> > F
Hello,
Hoping some solr guru can help me out here. We are a news
organization trying to migrate 10 million documents from FAST to solr. The
plan is to have our Editorial team add/modify synonyms multiple times during
a day as they deem appropriate. Hence we plan on using query time synonyms
Thank you very much...I shall try out the tokenizerFactory attribute on
SynonymFilterFactory
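In case it helps anyone searching the archives, a query-time synonym setup with that attribute looks roughly like this in schema.xml (the field type name and synonyms filename are placeholders). The tokenizerFactory attribute controls how each line of the synonyms file is tokenized, which matters for multi-word synonyms:

```xml
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- query-time synonyms only, so editorial updates need no re-index -->
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"
            tokenizerFactory="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Query-time synonyms avoid re-indexing on every synonym change, at the cost of known issues with multi-word synonyms and IDF skew.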
On Tue, Oct 13, 2009 at 12:27 AM, Chris Hostetter
wrote:
>
> : I had to be brief as my facets are in the order of 100K over 800K
> documents
> : and also if I give the complete schema.xml I was afraid nob
Hello Mr. Hostetter,
Thank you for patiently reading through my post. I apologize for being
cryptic in my previous messages..
>>when you cut/pasted the facet output, you excluded the field names. based
>>on the schema & solrconfig.xml snippets you posted later, i'm assuming
>>they are usstate, and
analyzed at all.
>
> On the query you provided before I didn't see the parameters to tell solr
> for which field it should produce facets.
>
> Something like:
>
>
> http://localhost:8080/solr-admin/topicscore/select/?facet=true&facet.limit=-1&*facet.field=
Yes Exactly the same
On Tue, Oct 6, 2009 at 4:52 PM, Christian Zambrano wrote:
> And you had the analyzer for that field set-up the same way as shown on
> your previous e-mail when you indexed the data?
>
>
>
>
> On 10/06/2009 03:46 PM, Ravi Kiran wrote:
>
>> I did
to see what tokens are generated for
> the string "New York"? It could be one of the token filter is adding the
> token 'new' for all strings that start with 'new'
>
>
> On 10/06/2009 02:54 PM, Ravi Kiran wrote:
>
>> Hello All,
>>
Hello All,
I am getting some ghost facets in solr 1.4. Can anybody kindly
help me understand why I get them and how to eliminate them. My schema.xml
snippet is given at the end. I am indexing Named Entities extracted via
OpenNLP into solr. My understanding regarding KeywordTokenizerFact
---
Ravi Kiran Bhaskar
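Ghost facets like the ones described in this thread usually mean the facet field is being tokenized, so faceting runs over individual terms ("new", "york") instead of the whole entity. A string-like field type that keeps each entity value as a single token might look like this (field type name is a placeholder, not the poster's actual schema):

```xml
<fieldType name="entity" class="solr.TextField" omitNorms="true">
  <analyzer>
    <!-- the whole field value becomes one token: "New York" stays "New York" -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.TrimFilterFactory"/>
  </analyzer>
</fieldType>
```

Solr's analysis admin page is the quickest way to confirm which filter in an existing chain is emitting the extra tokens.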