Re: Disable (or prohibit) per-field overrides
Hi,

Thanks for the suggestion and pointer. We've implemented it using a single regex in Nginx for now.

Cheers,

> : Does anyone know a useful method to disable or prohibit the per-field override
> : features for the search components? If not, where should I start to make it
> : configurable via solrconfig and attempt to come up with a working patch?
>
> If your goal is to prevent *clients* from specifying these (while you're
> still allowed to use them in your defaults), then the simplest solution is
> probably something external to Solr -- along the lines of mod_rewrite.
>
> Internally... that would be tough.
>
> You could probably write a SearchComponent (configured to run "first")
> that does it fairly easily -- just wrap the SolrParams in an impl that
> returns null any time a component asks for a param name that starts with
> "f." (and excludes those param names when asked for a list of the param
> names).
>
> It could probably be generalized to support arbitrary rules in a way
> that might be handy for other folks, but it would still just be
> wrapping all of the params, so it would prevent you from using them
> in your config as well.
>
> Ultimately I think a general solution would need to be in
> RequestHandlerBase ... where it wraps the request params using the
> defaults and invariants ... you'd want the custom exclusion rules to apply
> only to the request params from the client.
>
> -Hoss
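For anyone who wants to try the internal route, the wrapper logic Hoss describes can be sketched independently of Solr's actual SolrParams API. This is a minimal illustration using a plain Map; the class and method names are invented for the example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the wrapper idea: hide every parameter whose name starts
// with "f." from the components that consume the params. Solr's real
// SolrParams API differs; a plain Map is used here to show the logic.
public class PerFieldOverrideFilter {

    // Returns a copy of the params with every "f.*" key removed,
    // mimicking a SolrParams impl that returns null for those names
    // and omits them from the list of parameter names.
    public static Map<String, String> strip(Map<String, String> params) {
        Map<String, String> filtered = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!e.getKey().startsWith("f.")) {
                filtered.put(e.getKey(), e.getValue());
            }
        }
        return filtered;
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("q", "solr");
        params.put("facet.field", "cat");
        params.put("f.cat.facet.limit", "5"); // per-field override -- dropped
        System.out.println(strip(params)); // {q=solr, facet.field=cat}
    }
}
```

As Hoss notes, doing this over all params also hides "f.*" names set in your own defaults; the Nginx regex approach avoids that by filtering only the client request.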
Re: SolrJ new javabin format
Well, in Nutch we simply replace the two jars and it all still works.

> The CHANGES.txt file in branch_3x says that the javabin format has
> changed in Solr 3.1, so you need to update SolrJ as well as Solr. Is
> the SolrJ included in 3.1 compatible with both 3.1 and 1.4.1? If not,
> that's going to make a graceful upgrade of my replicated distributed
> installation a little harder.
>
> Thanks,
> Shawn
Re: How do you programmatically create new cores?
You have to create the core's folder with its conf inside the Solr home. Once that's done, you can call the create action of the admin handler: http://wiki.apache.org/solr/CoreAdmin#CREATE

If you need to dynamically create, start, and stop lots of cores, there's this patch, but I don't know about its current state: http://wiki.apache.org/solr/LotsOfCores

--
View this message in context: http://lucene.472066.n3.nabble.com/How-do-you-programatically-create-new-cores-tp1706487p1718648.html
Sent from the Solr - User mailing list archive at Nabble.com.
query between two dates
Hi there,

First I have to explain the situation. I have 2 indexed fields named tdm_avail1 and tdm_avail2 that are arrays of different dates. For a sample doc ("This is a sample doc"):

tdm_avail1: 2010-10-21T08:29:43Z, 2010-10-22T08:29:43Z, 2010-10-25T08:29:43Z, 2010-10-26T08:29:43Z, 2010-10-27T08:29:43Z
tdm_avail2: 2010-10-19T08:29:43Z, 2010-10-20T08:29:43Z, 2010-10-21T08:29:43Z, 2010-10-22T08:29:43Z

In my search form I have 2 fields named check-in date and check-out date. I want Solr to compare the range the user enters in the search form with the values of tdm_avail1 and tdm_avail2, and return the doc if all dates between the check-in and check-out dates match the tdm_avail1 or tdm_avail2 values.

For example, if the user enters:
check-in date: 2010-10-19
check-out date: 2010-10-21
that matches tdm_avail2, so the doc must be returned.

But if the user enters:
check-in date: 2010-10-25
check-out date: 2010-10-29
the doc should not be returned.

So I want the query that gives me the mentioned result. Could you help me please? Thanks in advance.
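One way to express "every day between check-in and check-out must appear in tdm_avail1 OR in tdm_avail2" is to build, per field, an AND of one day-wide range clause per required day, and then OR the two field groups. This is a client-side sketch, not tested against a live index; it assumes day granularity and uses the field names from the post:

```java
import java.time.LocalDate;
import java.util.StringJoiner;

// Builds a Solr query string requiring every day in [checkIn, checkOut]
// to match in one of the two multi-valued date fields. Each day-wide
// range clause matches if any value in the field falls on that day.
public class AvailabilityQuery {

    static String clausesFor(String field, LocalDate in, LocalDate out) {
        StringJoiner sj = new StringJoiner(" AND ", "(", ")");
        for (LocalDate d = in; !d.isAfter(out); d = d.plusDays(1)) {
            sj.add(field + ":[" + d + "T00:00:00Z TO " + d + "T23:59:59Z]");
        }
        return sj.toString();
    }

    public static String build(LocalDate checkIn, LocalDate checkOut) {
        return clausesFor("tdm_avail1", checkIn, checkOut)
             + " OR "
             + clausesFor("tdm_avail2", checkIn, checkOut);
    }

    public static void main(String[] args) {
        System.out.println(build(LocalDate.parse("2010-10-19"),
                                 LocalDate.parse("2010-10-21")));
    }
}
```

With the sample doc above, the 19th-to-21st query is satisfied by the tdm_avail2 group, while a 25th-to-29th query fails both groups (tdm_avail1 has no values on the 28th or 29th).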
Spanning an index across multiple volumes
Is it possible to get an index to span multiple disk volumes - i.e. when its 'primary' volume fills up (or optimize needs more room), tell Solr/Lucene to use a secondary/tertiary/quaternary et al volume? I've not seen any configuration that would allow this, but maybe others have a use case for such functionality? Thanks, Peter
DIH - configure password in 1 place and store it in encrypted form?
Hi!

I have multiple cores reading from the same database and I've provided the user credentials in all the data-config.xml files. Is there a way to tell JdbcDataSource in data-config.xml to read the username and password from a file? This would save me from having to change the username/password in multiple data-config.xml files.

And is it possible to store the password in encrypted form and let DIH call a decrypter to read the password?

Thanks a lot.
--
Arun
Re: Spanning an index across multiple volumes
Juggling disk volumes does not sound like a logical responsibility for Solr to me. Solr/Lucene expects to have enough room to live in. Better to push this down to the OS level. There are all kinds of logical volume managers around that let you add new disks to the same logical volume, achieving what you want. Or, if you run on a cloud architecture, you may increase disk space with a couple of API calls triggered by monitoring...

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

On 17. okt. 2010, at 12.38, Peter Sturge wrote:

> Is it possible to get an index to span multiple disk volumes - i.e.
> when its 'primary' volume fills up (or optimize needs more room), tell
> Solr/Lucene to use a secondary/tertiary/quaternary et al volume?
>
> I've not seen any configuration that would allow this, but maybe
> others have a use case for such functionality?
>
> Thanks,
> Peter
indexing a MySQL database
I'm trying to index a table in a MySQL database. First I created a db-config.xml file containing the data source definition, followed by the definition of the table entity. Second, I added the field in the schema.xml file, and finally I declared the db-config.xml file in solrconfig.xml as db-data-config.xml.

After running the import, the index folder contains only the segments.gen & segments_1 files, and when I try to search I get no results.

Can anybody offer some help? Thanks in advance.
Re: indexing a MySQL database
Two suggestions:

a) I noticed that your DIH spec in solrconfig.xml seems to refer to "db-data-config.xml", but you said that your file was db-config.xml. You may want to check this to make sure that your file names are correct.

b) What does your log say when you ran the import process?

- Bill

-----Original Message-----
From: do3do3
Sent: Sunday, October 17, 2010 8:29 AM
To: solr-user@lucene.apache.org
Subject: indexing mysql database

I'm trying to index a table in a MySQL database. First I created a db-config.xml file containing the data source definition, followed by the definition of the table entity. Second, I added the field in the schema.xml file, and finally I declared the db-config.xml file in solrconfig.xml as db-data-config.xml.

After running the import, the index folder contains only the segments.gen & segments_1 files, and when I try to search I get no results.

Can anybody offer some help? Thanks in advance.
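For reference, the DIH handler registration in solrconfig.xml names the config file explicitly, so the value there has to match the actual file in the core's conf/ directory. A minimal sketch of the standard registration (the handler path is the conventional one):

```xml
<requestHandler name="/dataimport"
                class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <!-- This name must match the actual file in the core's conf/ directory -->
    <str name="config">db-data-config.xml</str>
  </lst>
</requestHandler>
```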
Re: How do you programmatically create new cores?
Hi Marc,

Thanks for the reply. So, as I understand it, I need to make an HTTP GET call with an action parameter set to CREATE to dynamically create a core? I don't see an API to do this anywhere.

On Oct 17, 2010, at 3:54 PM, Marc Sturlese wrote:

> You have to create the core's folder with its conf inside the Solr home.
> Once that's done, you can call the create action of the admin handler:
> http://wiki.apache.org/solr/CoreAdmin#CREATE
> If you need to dynamically create, start, and stop lots of cores, there's
> this patch, but I don't know about its current state:
> http://wiki.apache.org/solr/LotsOfCores
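The CoreAdmin "API" is just an HTTP call to the cores admin handler; the wiki page linked above documents the parameters. A sketch of building the CREATE request URL -- the host, port, and names here are placeholders, and the core's instanceDir with its conf/ must already exist on disk:

```java
// Builds the CoreAdmin CREATE URL. Issuing it is an ordinary HTTP GET.
public class CreateCore {

    public static String createUrl(String solrBase, String core, String instanceDir) {
        return solrBase + "/admin/cores?action=CREATE"
             + "&name=" + core
             + "&instanceDir=" + instanceDir;
    }

    public static void main(String[] args) throws Exception {
        String url = createUrl("http://localhost:8983/solr", "core1", "core1");
        System.out.println(url);
        // To actually issue it (needs a running Solr):
        // java.net.HttpURLConnection conn =
        //     (java.net.HttpURLConnection) new java.net.URL(url).openConnection();
        // conn.getResponseCode(); // 200 on success
    }
}
```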
Re: DIH - configure password in 1 place and store it in encrypted form?
On Sun, Oct 17, 2010 at 7:02 PM, Arunkumar Ayyavu wrote:

> Hi!
>
> I have multiple cores reading from the same database and I've provided
> the user credentials in all the data-config.xml files. Is there a way to
> tell JdbcDataSource in data-config.xml to read the username and
> password from a file? This would save me from having to change the
> username/password in multiple data-config.xml files.
>
> And is it possible to store the password in encrypted form and let DIH
> call a decrypter to read the password?
[...]

As far as I am aware, it is not possible to do either of the two things above. However, one could extend the JdbcDataSource class to add such functionality.

Regards,
Gora
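A sketch of the credential-loading half of that idea -- one shared properties file that every core's extended data source reads, with the password stored in obfuscated form. Base64 below is only a stand-in for a real cipher, the wiring into a JdbcDataSource subclass is omitted, and the property names are illustrative:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Base64;
import java.util.Properties;

// Loads db.user and a Base64-obfuscated db.password from a properties
// source and returns the decoded pair. A real setup would substitute
// a proper cipher for the Base64 decode.
public class DbCredentials {

    public static String[] load(Reader source) throws IOException {
        Properties props = new Properties();
        props.load(source);
        String user = props.getProperty("db.user");
        String decoded = new String(
            Base64.getDecoder().decode(props.getProperty("db.password")));
        return new String[] { user, decoded };
    }

    public static void main(String[] args) throws IOException {
        // "c2VjcmV0" is Base64 for "secret"
        String file = "db.user=solr\ndb.password=c2VjcmV0\n";
        String[] creds = load(new StringReader(file));
        System.out.println(creds[0] + " / " + creds[1]); // solr / secret
    }
}
```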
How can I use the SolrJ binary format for indexing?
Hi all,

I have a huge number of XML files to index. I want to index using the SolrJ binary format to get a performance gain, because I've heard that indexing with XML files is quite slow. But I don't know how to index through the SolrJ binary format and can't find examples.

Please give some help.

Thanks,
Re: Spanning an index across multiple volumes
Hi,

The closest thing I can think of is FileSwitchDirectory ( http://search-lucene.com/?q=FileSwitchDirectory )

Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/

----- Original Message -----
> From: Jan Høydahl / Cominvent
> To: solr-user@lucene.apache.org
> Sent: Sun, October 17, 2010 10:15:33 AM
> Subject: Re: Spanning an index across multiple volumes
>
> Juggling disk volumes does not sound like a logical responsibility for Solr
> to me. Solr/Lucene expects to have enough room to live in.
> Better to push this down to the OS level. There are all kinds of logical
> volume managers around which let you add new disks to the same logical
> volume, achieving what you want. Or if you run on a cloud architecture,
> you may increase disk with a couple of API calls triggered by monitoring...
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> On 17. okt. 2010, at 12.38, Peter Sturge wrote:
>
>> Is it possible to get an index to span multiple disk volumes - i.e.
>> when its 'primary' volume fills up (or optimize needs more room), tell
>> Solr/Lucene to use a secondary/tertiary/quaternary et al volume?
>>
>> I've not seen any configuration that would allow this, but maybe
>> others have a use case for such functionality?
>>
>> Thanks,
>> Peter
Re: How can I use the SolrJ binary format for indexing?
On Mon, Oct 18, 2010 at 8:31 AM, Jason, Kim wrote:

> Hi all,
> I have a huge number of XML files to index.
> I want to index using the SolrJ binary format to get a performance gain,
> because I've heard that indexing with XML files is quite slow.
[...]

I don't know about SolrJ's binary format, but indexing through XML is quite fast in our experience. Have you tried it out to see if it meets your requirements?

Regards,
Gora
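For what it's worth, switching SolrJ (1.4-era API) to the binary javabin request format is a small change on the client. This is an untested sketch -- it needs the SolrJ jar and a running Solr, and the URL and field names are placeholders:

```java
import org.apache.solr.client.solrj.impl.BinaryRequestWriter;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

// Indexes one document over javabin instead of the default XML writer.
public class BinaryIndexer {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server =
            new CommonsHttpSolrServer("http://localhost:8983/solr");
        // Swap the default XML request writer for the binary (javabin) one
        server.setRequestWriter(new BinaryRequestWriter());

        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");
        doc.addField("title", "javabin test");
        server.add(doc);
        server.commit();
    }
}
```

Note the caveat from the "SolrJ new javabin format" thread above: the javabin format changed in Solr 3.1, so the SolrJ version must match the server when using the binary writer.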
How can I get a correct stemmed query?
Hi~. I'm a beginner who wants to build a search system using Solr 1.4.1 and Lucene 2.9.2. I get a correct Lucene query from my custom Analyzer and filter for a given query, but no results are displayed.

Here is my Analyzer source:

--
public class KLTQueryAnalyzer extends Analyzer {
  public static final Version LUCENE_VERSION = Version.LUCENE_29;
  public static int QUERY_MIN_LEN_WORD_FILTER = 1;
  public static int QUERY_MAX_LEN_WORD_FILTER = 40;
  public int elapsedTime = 0;

  @Override
  public TokenStream tokenStream(String paramString, Reader reader) {
    StandardTokenizer tokenizer = new StandardTokenizer(
        du.utas.mcrdr.ir.lucene.WebDocIR.LUCENE_VERSION, reader);
    TokenStream tokenStream = new LengthFilter(tokenizer,
        QUERY_MIN_LEN_WORD_FILTER, QUERY_MAX_LEN_WORD_FILTER);
    tokenStream = new LowerCaseFilter(tokenStream);
    // My custom stemmer
    KLTSingleWordStemmer stemer = new KLTSingleWordStemmer(
        QUERY_MIN_LEN_WORD_FILTER, QUERY_MAX_LEN_WORD_FILTER);
    // My custom analyzer filter. This filter returns a sub-merged query.
    // e.g. INPUT: flyaway
    //      RETURN VALUE: fly +body:away
    tokenStream = new KLTQueryStemFilter(tokenStream, stemer, this);
    return tokenStream;
  }
}
--

Example query:
Input user query: +body:flyaway
Expected analyzed query: +body:fly +body:away
Indexed data: body -> fly away

I'm expecting 1 doc to be returned from the index, but I get no results.

To explain my flow:
1. The user inputs the query: +body:flyaway
2. The Analyzer returns: fly +body:away
3. Solr attaches the search field "+body" to the filter's returned query, as I defined in schema.xml (default operator "AND").
4. I indexed 1 doc that has a field named "body" containing the phrase "fly away".
5. I expect 1 doc to be returned for the query "+body:fly +body:away", but 0 docs are returned.

What's the problem? Can anybody help me please? :>