Regarding Response Builder

2009-07-12 Thread Amandeep Singh09
The ResponseBuilder class exposes SolrQueryRequest as a public field. Using SolrQueryRequest we can get the SolrParams like this: SolrParams params = req.getParams(); Now I want to get the values of those params. What should be the approach, as SolrParams is an abstract class and its get(String) method
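A minimal sketch of the usual pattern (it assumes the Solr 1.4-era API; parameter names like "q" and "fq" are just examples). You don't need a concrete SolrParams subclass — the request hands you an instance and you call the lookup methods on it:

```java
// Illustrative sketch, not tied to a specific Solr release.
public void handle(SolrQueryRequest req) {
  SolrParams params = req.getParams();

  // get(String) returns the first value for a param, or null if absent
  String q = params.get("q");

  // typed helpers with defaults avoid null checks
  int rows = params.getInt("rows", 10);
  boolean debug = params.getBool("debug", false);

  // multi-valued params (e.g. several fq clauses)
  String[] filters = params.getParams("fq");
}
```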

Re: Deleting from SolrQueryResponse

2009-07-12 Thread pof
Okay. So still, how would I go about creating a new DocList and DocSet, as they cannot be instantiated? Thanks, Brett. hossman wrote: > > > : > one thing to keep in mind however is that post-processing a DocList to > : > filter stuff out is almost never a good idea -- things get really > : >

Re: Replication on Slave startup (Solr 1.4)

2009-07-12 Thread Noble Paul നോബിള്‍ नोब्ळ्
I guess initialDelay can be set to 0. That should solve this. On Mon, Jul 13, 2009 at 9:16 AM, Yonik Seeley wrote: > On Sun, Jul 12, 2009 at 11:10 PM, Mark Miller wrote: >> I wonder if using a 0 initialDelay would make more sense? > > Hmmm, that raises the question of when an initialDelay > 0 do

Re: dropping index at startup

2009-07-12 Thread Mark Miller
On Sun, Jul 12, 2009 at 10:55 AM, manuel aldana wrote: > is it possible to clean up solr index by passing a start param? currently I > am deleting the data/ folder to achieve this, which feels a bit unnatural. > It would be cool to have something like -Dsolr.drop.index as parameter. A great and

Re: Replication on Slave startup (Solr 1.4)

2009-07-12 Thread Yonik Seeley
On Sun, Jul 12, 2009 at 11:10 PM, Mark Miller wrote: > I wonder if using a 0 initialDelay would make more sense? Hmmm, that raises the question of when an initialDelay > 0 does make sense... what's the use case? -Yonik http://www.lucidimagination.com
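For context, a sketch of the slave-side replication config the thread is about (a solrconfig.xml fragment for the 1.4 ReplicationHandler; the masterUrl host and the 60-second interval are made-up examples). pollInterval is what delays the first fetch after startup:

```xml
<!-- solrconfig.xml on the slave; pollInterval format is HH:mm:ss -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <!-- the slave polls every 60s; the thread discusses whether the first
         poll should also fire immediately at startup (initialDelay 0) -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```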

Re: Replication on Slave startup (Solr 1.4)

2009-07-12 Thread Mark Miller
Jordan Mendler wrote: Hi all, I am having an issue where Solr slaves do not pull replication data on startup. Rather they wait until the amount of time of poll_interval, before pulling the initial index. This is causing new and rebooted nodes to have a stale index for polling_interval seconds, i

Re: Can't limit return fields in custom request handler

2009-07-12 Thread Chris Hostetter
: Query filter = new TermQuery(new Term("inStores", "true")); that will work if "inStores" is a TextField or a StrField and it's got the term "true" indexed in it ... but if it's a BoolField like in the example schema then the values that appear in the index are "T" and "F" When yo
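A one-line sketch of the fix Hoss describes, assuming "inStores" really is a BoolField per the example schema (the field name is from the original question):

```java
// BoolField indexes true/false as the single-character terms "T" and "F",
// so a raw TermQuery must use the indexed term, not the external value.
Query filter = new TermQuery(new Term("inStores", "T"));  // not "true"
```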

Replication on Slave startup (Solr 1.4)

2009-07-12 Thread Jordan Mendler
Hi all, I am having an issue where Solr slaves do not pull replication data on startup. Rather they wait until the amount of time of poll_interval, before pulling the initial index. This is causing new and rebooted nodes to have a stale index for polling_interval seconds, in addition to the amount

Boosting certain documents dynamically at query-time

2009-07-12 Thread Michael Lugassy
Hi guys -- Using solr 1.4 functions at query-time, can I dynamically boost certain documents which are: a) not on the same range, i.e. have very different document ids, b) have different boost values, c) part of a long list (can be around 1,000 different document ids with 50 different boost values
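One common approach for this kind of per-document boosting is dismax's bq (boost query) parameter. A hypothetical request (field name, ids, and boost values are all invented for illustration; with ~1,000 ids the request grows large, so POSTing the query is usually preferable):

```text
http://localhost:8983/solr/select?defType=dismax&q=camera
    &bq=id:101^5.0+id:202^3.2+id:303^1.5
```

This only boosts, it does not filter: documents not listed still match normally.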

Re: printing scores

2009-07-12 Thread Chris Hostetter
: I have noticed a weird behaviour doing score testing. I do a search using : the dismax request handler with no extra boosting on an index of a million docs, : searching in five fields. : Printing the score of the 3rd, 4th, 5th and 6th docs I can see that it is the same. : If I build the index with my own lucene

Re: solr jmx connection

2009-07-12 Thread Chris Hostetter
: > However, I am having a harder time trying to access the SOLR MBeans. First, : > I could have the wrong service URL. Second, I'm confused as to which MBeans : > SOLR provides. : > : >: The service url is of the form -- : "service:jmx:rmi:///jndi/rmi://localhost:/solr". The following code :

DutchStemFilterFactory reducing double vowels bug ?

2009-07-12 Thread Jan Murre
Hi, Some time ago I configured my Solr instance to use the DutchStemFilterFactory. When used during indexing or querying, the filter reduces double vowels to single vowels, which is not always what we want. Words like 'baas', 'paas', 'maan', 'boom' etc. are indexed as 'bas', 'pas', 'man' and 'b
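For what it's worth, the vowel un-doubling is standard behaviour of the Snowball Dutch stemming algorithm rather than a Solr bug. One era-appropriate workaround is to shield specific words from the stemmer via a protected-words file, e.g. using the Snowball factory (a schema.xml sketch; the fieldType name and protwords filename are placeholders — check your version's factory for the exact attribute support):

```xml
<fieldType name="text_nl" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- words listed in protwords_nl.txt pass through unstemmed -->
    <filter class="solr.SnowballPorterFilterFactory"
            language="Dutch" protected="protwords_nl.txt"/>
  </analyzer>
</fieldType>
```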

dropping index at startup

2009-07-12 Thread manuel aldana
is it possible to clean up the solr index by passing a start param? currently I am deleting the data/ folder to achieve this, which feels a bit unnatural. It would be cool to have something like -Dsolr.drop.index as parameter. btw, how does solr generally handle documents in the index which aren't ma
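Absent a -Dsolr.drop.index style switch, the supported way to empty an index without touching data/ is a delete-by-query for all documents, followed by a commit (a sketch; host and port are placeholders for your instance):

```shell
curl http://localhost:8983/solr/update -H 'Content-Type: text/xml' \
     --data-binary '<delete><query>*:*</query></delete>'
curl http://localhost:8983/solr/update -H 'Content-Type: text/xml' \
     --data-binary '<commit/>'
```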

Re: Caching per segmentReader?

2009-07-12 Thread Yonik Seeley
On Sat, Jul 11, 2009 at 7:38 PM, Jason Rutherglen wrote: > Are we planning on implementing caching (docsets, documents, results) per > segment reader or is this something that's going to be in 1.4? Yes, I've been thinking about docsets and documents (perhaps not results) per segment. It won't make

Re: Select tika output for extract-only?

2009-07-12 Thread Yonik Seeley
Peter, I'm hacking up solr cell right now, trying to simplify the parameters and fix some bugs (see SOLR-284) A quick patch to specify the output format should make it into 1.4 - but you may want to wait until I finish. -Yonik http://www.lucidimagination.com On Sat, Jul 11, 2009 at 5:39 PM, Peter

Re: Select tika output for extract-only?

2009-07-12 Thread Grant Ingersoll
On Jul 11, 2009, at 5:39 PM, Peter Wolanin wrote: I had been assuming that I could choose among possible tika output formats when using the extracting request handler in extract-only mode as if from the CLI with the tika jar: -x or --xmlOutput XHTML content (default) -h or --html
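For reference, extract-only mode returns Tika's output in the Solr response instead of indexing it. A sketch of such a request (parameter and endpoint names reflect the SOLR-284 work in progress at the time — verify against your build; the filename is a placeholder):

```shell
curl 'http://localhost:8983/solr/update/extract?extractOnly=true' \
     -F 'myfile=@doc.pdf'
```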

Re: Create incremental snapshot

2009-07-12 Thread tushar kapoor
Thanks for the reply Asif. We have already tried removing the optimization step. Unfortunately the commit command alone is also causing an identical behaviour . Is there any thing else that we are missing ? Asif Rahman wrote: > > Tushar: > > Is it necessary to do the optimize on each iteration