Fwd: HTTP Status 400 - org.apache.lucene.queryParser.ParseException
-- Forwarded message --
From: kun xiong
Date: 2011/1/18
Subject: HTTP Status 400 - org.apache.lucene.queryParser.ParseException
To: solr-user@lucene.apache.org

Hi all,

I get a ParseException when I query Solr with the toString() output of a Lucene BooleanQuery. I use the default parser, LuceneQParserPlugin, which should support the whole Lucene syntax, right?

Java code:

  BooleanQuery bq = new BooleanQuery();
  Query q1 = new TermQuery(new Term("I_NAME_ENUM", "KFC"));
  Query q2 = new TermQuery(new Term("I_NAME_ENUM", "MCD"));
  bq.add(q1, Occur.SHOULD);
  bq.add(q2, Occur.SHOULD);
  bq.setMinimumNumberShouldMatch(1);
  String solrQuery = bq.toString();

The query string is:

  q=(I_NAME_ENUM:kfc I_NAME_ENUM:best western)~1

Exception:

  message: org.apache.lucene.queryParser.ParseException: Cannot parse '(I_NAME_ENUM:kfc I_NAME_ENUM:best western)~1': Encountered " "~1 "" at line 1, column 42. Was expecting one of: ... "+" ... "-" ... "(" ... "*" ... "^" ... "[" ... "{" ...

  description: The request sent by the client was syntactically incorrect (same ParseException as above).

Could anyone help?

Thanks
Kun
Re: HTTP Status 400 - org.apache.lucene.queryParser.ParseException
Hi Erick,

Thanks for the fast reply. I kind of figured it wasn't supposed to work that way, but it would have some benefits when we migrate from Lucene to Solr: we wouldn't have to rewrite the query-building part. Is there any parser that can do that?

2011/1/18 Ahmet Arslan
> > what's the alternative?
>
> q=kfc+mdc&defType=dismax&mm=1&qf=I_NAME_ENUM
>
> See more: http://wiki.apache.org/solr/DisMaxQParserPlugin
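For reference, the dismax request suggested in the reply above can be assembled programmatically. A minimal sketch in plain Java — the parameter names (q, defType, mm, qf) are the standard dismax ones from the wiki page; the helper name is hypothetical:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class DismaxUrlBuilder {
    // Build the query-string portion of a dismax request:
    // q=<terms>&defType=dismax&mm=<minMatch>&qf=<field list>
    static String buildDismaxParams(String terms, int mm, String qf) {
        try {
            return "q=" + URLEncoder.encode(terms, "UTF-8")
                    + "&defType=dismax"
                    + "&mm=" + mm
                    + "&qf=" + URLEncoder.encode(qf, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        // Reproduces the query string from the reply above
        System.out.println(buildDismaxParams("kfc mdc", 1, "I_NAME_ENUM"));
        // -> q=kfc+mdc&defType=dismax&mm=1&qf=I_NAME_ENUM
    }
}
```

Note that mm=1 takes over the role of setMinimumNumberShouldMatch(1), which is the part the lucene query parser could not express.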
Which QueryParser to use
Hi all,

We are planning to move our search core from the Lucene library to Solr, and we are new here. We have a question: which parser should we choose?

Our original Lucene query is kind of complicated, e.g.:

  +((name1:A name2:B)^1000 (category1:C ^100 category:D ^10) ^100) +(location1:E location2:F location3:G)~2

Can the dismax query parser handle this case? If not, what's the alternative? Or can we still use the lucene query parser without setMinimumNumberShouldMatch, which the lucene query parser does not support?

Thanks
Kun
Re: Which QueryParser to use
We constructed our query with the Lucene API before, as BooleanQuery, TermQuery and that kind of thing. The string I provided is the value from the Query.toString() method. The types are all String.

2011/1/20 Ahmet Arslan
> > Hi all
> >
> > We are planning to move our search core from the Lucene library to Solr, and we are new here.
> >
> > We have a question: which parser should we choose?
> >
> > Our original Lucene query is kind of complicated, e.g.:
> > +((name1:A name2:B)^1000 (category1:C ^100 category:D ^10) ^100) +(location1:E location2:F location3:G)~2
> >
> > Can the dismax query parser handle this case? If not, what's the alternative?
> >
> > Or can we still use the lucene query parser without setMinimumNumberShouldMatch, which the lucene query parser does not support?
>
> As I understand it, you were constructing your queries programmatically, without using Lucene's QueryParser, right? If yes, how were you handling analysis of the query terms? Can you tell us the types of these fields (location, name)?
Re: Which QueryParser to use
That example string means our query is a BooleanQuery containing BooleanQuerys. I am wondering how to write a complicated boolean query like (A or B or C) and (D or E) for dismax, or whether I have to use the lucene query parser.

2011/1/20 Lalit Kumar 4
> -Original Message-
> From: Ahmet Arslan
> Date: Thu, 20 Jan 2011 10:43:46
> To: solr-user@lucene.apache.org
> Reply-To: "solr-user@lucene.apache.org"
> Subject: Re: Which QueryParser to use
>
> > Hi all
> >
> > We are planning to move our search core from the Lucene library to Solr, and we are new here.
> >
> > We have a question: which parser should we choose?
> >
> > Our original Lucene query is kind of complicated, e.g.:
> > +((name1:A name2:B)^1000 (category1:C ^100 category:D ^10) ^100) +(location1:E location2:F location3:G)~2
> >
> > Can the dismax query parser handle this case? If not, what's the alternative?
> >
> > Or can we still use the lucene query parser without setMinimumNumberShouldMatch, which the lucene query parser does not support?
>
> As I understand it, you were constructing your queries programmatically, without using Lucene's QueryParser, right? If yes, how were you handling analysis of the query terms? Can you tell us the types of these fields (location, name)?
Re: Which QueryParser to use
Thanks a lot for your reply. That was very helpful.

We construct our Lucene query after certain analysis (e.g. word segmentation, category identification). Do you mean we should plug that analysis logic and the query-construction part into Solr, so that Solr takes the very beginning input?

Kun

2011/1/20 Ahmet Arslan
> > We constructed our query with the Lucene API before, as BooleanQuery, TermQuery, that kind of thing.
>
> Okay, it seems that your fields are not analyzed and you don't do any analysis while constructing your query with the Lucene API. Correct?
>
> Then you can use your existing Java code directly inside a Solr plugin.
>
> http://wiki.apache.org/solr/SolrPlugins#QParserPlugin
>
> Existing sub-classes of QParserPlugin can give you an idea.
Re: Which QueryParser to use
Okay, thanks very much.

2011/1/21 Ahmet Arslan
> > We construct our Lucene query after certain analysis (e.g. word segmentation, category identification).
>
> By analysis, I am referring to the charfilter(s) + tokenizer + tokenfilter(s) combination.
>
> > Do you mean we should plug that analysis logic and the query-construction part into Solr, so that Solr takes the very beginning input?
>
> I didn't understand what "very beginning input" is.
>
> Let's say you have a pure Java program that takes a String as input and returns org.apache.lucene.search.Query as output, e.g. Query constructMyMagicQuery(String). You can embed this into Solr:
>
> public QParser createParser(String qstr, SolrParams localParams, SolrParams params, SolrQueryRequest req) {
>   return new QParser(qstr, localParams, params, req) {
>     public Query parse() throws ParseException {
>       String query = params.get(CommonParams.Q);
>       return constructMyMagicQuery(query);
>     }
>   };
> }
>
> Your custom program can read/use any key-value pairs from the search URL, if required.
How can I make one request for all cores and get the response classified by core
I have a group of subindexes, each of which is a core in my Solr setup now. I want to make one query across some of them; how can I do that? And can I classify the response docs by index, using facet search? Thanks Kun
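One route that is often pointed at for this is Solr's distributed search: a single request listing the cores in the shards parameter, with faceting then computed across all of them (results are merged rather than labelled per core, so a core-identifying field would be needed for classification). A sketch of building such a URL — host, port, and core names are illustrative:

```java
import java.util.Arrays;
import java.util.List;

public class ShardedQueryUrl {
    // Build a distributed-search URL: one request against a "lead" core,
    // fanned out to several cores via the shards parameter.
    static String buildUrl(String baseCore, String q, List<String> shards) {
        return baseCore + "/select?q=" + q
                + "&shards=" + String.join(",", shards);
    }

    public static void main(String[] args) {
        System.out.println(buildUrl(
                "http://localhost:8983/solr/core0",
                "name:kfc",
                Arrays.asList("localhost:8983/solr/core0",
                              "localhost:8983/solr/core1")));
    }
}
```

Facet parameters (facet=true&facet.field=...) can be appended just like on a single-core request; the counts come back aggregated over all listed shards.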
How could I set multi-value for a field in DataImporter
Since the interface of DataImporter returns a Map, I can't put multiple values for the same field, right?

Example: I write a class extending DataImporter and want to index {"value1", "value2"} for the field "name". How should I do this?

Many thanks.

Kun
Re: How could I set multi-value for a field in DataImporter
Stefan,

Thanks very much for your quick reply.

Actually I have to write a CustomDataImporter class to full-import the data and index it all, so it should be done in Java code and schema.xml.

When I write a CustomDataImporter, I have to implement a nextRow() method, which returns a map. Also, schema.xml has a multiValued attribute for each field, and I am wondering how I could make use of it.

I believe there must be several ways to make it multi-valued, e.g. using an analyzer or copyField. I am looking for the most efficient and easiest way, one where I don't have to change the data format.

Kun

2011/3/31 Stefan Matheis
> Kun,
>
> it should be enough to use the same field a second time, like this:
>
> value2
>
> Regards
> Stefan
>
> On Thu, Mar 31, 2011 at 11:39 AM, kun xiong wrote:
> > Since the interface of DataImporter returns a Map, I can't put multiple values for the same field, right?
> >
> > Example: I write a class extending DataImporter and want to index {"value1", "value2"} for the field "name".
> >
> > How should I do this?
> >
> > Many thanks.
> >
> > Kun
Re: How could I set multi-value for a field in DataImporter
I found the answer in the source code: use a Collection as the value. Thanks anyway.

2011/4/1 kun xiong
> Stefan,
>
> Thanks very much for your quick reply.
>
> Actually I have to write a CustomDataImporter class to full-import the data and index it all, so it should be done in Java code and schema.xml.
>
> When I write a CustomDataImporter, I have to implement a nextRow() method, which returns a map. Also, schema.xml has a multiValued attribute for each field, and I am wondering how I could make use of it.
>
> I believe there must be several ways to make it multi-valued, e.g. using an analyzer or copyField. I am looking for the most efficient and easiest way, one where I don't have to change the data format.
>
> Kun
>
> 2011/3/31 Stefan Matheis
>> Kun,
>>
>> it should be enough to use the same field a second time, like this:
>>
>> value2
>>
>> Regards
>> Stefan
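For later readers, the "Collection as value" answer boils down to the shape of the row map. A minimal sketch in plain Java — the field names and values are just the ones from the example above, and in a real importer this map would be what nextRow() returns:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class MultiValuedRow {
    // Build the row map a custom importer's nextRow() would return.
    // A single value goes in as-is; multiple values go in as a Collection,
    // and each element is then indexed into the multiValued field.
    static Map<String, Object> buildRow() {
        Map<String, Object> row = new HashMap<String, Object>();
        row.put("id", "doc1");                              // single-valued field
        row.put("name", Arrays.asList("value1", "value2")); // multiValued field
        return row;
    }

    public static void main(String[] args) {
        System.out.println(buildRow().get("name")); // prints [value1, value2]
    }
}
```

The corresponding field in schema.xml would need multiValued="true" for the list to be accepted at index time.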
How could each core share configuration files
Hi all,

Currently in my project most of the core configurations are the same (solrconfig.xml, dataimport.properties, ...), and they are duplicated in each core's own folder. I am wondering how I could put the common ones in one shared folder that every core could use, while keeping the differing ones in each core's own folder.

Thanks
Kun
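One layout that is sometimes suggested for this — a sketch only; the core names and relative paths are illustrative, and whether relative paths resolve this way depends on the Solr version, so verify against your setup — is to keep the shared files in a common directory and point each core's config and schema attributes at them from solr.xml:

```xml
<!-- solr.xml sketch: each core keeps its own instanceDir for the files
     that differ, while config/schema point at a shared copy.
     All paths here are illustrative assumptions. -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0"
          config="../shared/conf/solrconfig.xml"
          schema="../shared/conf/schema.xml"/>
    <core name="core1" instanceDir="core1"
          config="../shared/conf/solrconfig.xml"
          schema="../shared/conf/schema.xml"/>
  </cores>
</solr>
```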
solr/home property setting
Hi all,

I am wondering how I could set a path for the solr/home property. Our Solr home is inside solr.war, so I don't want an absolute path (we will deploy to different boxes). Currently I hard-code a relative path as the solr/home property in web.xml:

  <env-entry>
    <env-entry-name>solr/home</env-entry-name>
    <env-entry-value>../webapps/solr/home</env-entry-value>
    <env-entry-type>java.lang.String</env-entry-type>
  </env-entry>

But this way I have to start Tomcat under bin/, since the relative path seems to be resolved against the start directory. How can I set the solr/home property so that it does not depend on the Tomcat start path?

Thanks
Kun
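A common alternative to hard-coding the value inside the war is a JNDI entry in Tomcat's per-application context file, which keeps the path outside the webapp and independent of the directory Tomcat is started from. A sketch — the file location follows Tomcat's usual convention and the paths are illustrative:

```xml
<!-- conf/Catalina/localhost/solr.xml (Tomcat context file; paths illustrative) -->
<Context docBase="/opt/tomcat/webapps/solr.war">
  <Environment name="solr/home" type="java.lang.String"
               value="/opt/solr/home" override="true"/>
</Context>
```

Because the JNDI value lives in the container configuration rather than web.xml, each box can carry its own absolute path without repackaging the war.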
Re: [POLL] How do you (like to) do logging with Solr
[ ] I always use the JDK logging as bundled in solr.war, that's perfect
[X] I sometimes use log4j or another framework and am happy with re-packaging solr.war
[ ] Give me solr.war WITHOUT an slf4j logger binding, so I can choose at deploy time
[X] Let me choose whether to bundle a binding or not at build time, using an ANT option
[ ] What's wrong with the "solr/example" Jetty? I never run Solr elsewhere!
[ ] What? Solr can do logging? How cool!

2011/5/19 Chris Hostetter
> : An alternative to manually repackaging solr.war, as in #1, is Hoss'
> : suggestion in SOLR-2487 of a new ANT option to build Solr artifacts
> : without the JUL binding.
>
> More specifically, I'm advocating a new ANT property that would let you
> specify (by path) whatever SLF4J binding jar you want to include, or
> that you don't want any SLF4J binding jar included (by specifying a path
> to a jar that doesn't exist).
>
> I want the default...
>   ant dist
>
> I don't want a binding in solr.war...
>   ant -Dslf4j.jar.path=BOGUS_FILE_PATH dist
>
> I want a specific binding in solr.war...
>   ant -Dslf4j.jar.path=/my/lib/slf4j-jcl-*.jar dist
>
> -Hoss
How to enable JMX by configuration
Hi,

I am wondering how to start the JMX monitor without a code change. Currently I have to insert the code "LocateRegistry.createRegistry();" into SolrCore.java, and I specify <jmx/> in solrconfig.xml. Can I make it work with only a configuration change?

Thanks
Kun
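For reference, the usual configuration-only route is to let the JVM create the RMI registry and connector itself instead of calling LocateRegistry in code: keep the <jmx/> element in solrconfig.xml and pass the standard management flags to the JVM running Tomcat. A sketch — the port is arbitrary, and authentication/SSL are disabled here only as a local-test assumption:

```shell
# Illustrative JVM flags (e.g. via CATALINA_OPTS before starting Tomcat).
# The port is arbitrary; disable auth/SSL only on a trusted network.
export CATALINA_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9999 \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Dcom.sun.management.jmxremote.authenticate=false"
```

With these flags the platform MBean server is reachable by jconsole or any JMX client at the configured port, with no change to SolrCore.java.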
How could I monitor solr cache
Hi,

I am wondering how I could get the Solr cache running status. I know there is a JMX bean containing that information. I just want to know what tool or method you use to monitor the caches, in order to enhance performance or detect issues.

Thanks a lot
Kun
Re: How could I monitor solr cache
I am working on performance tuning in our dev environment. I am looking for a method that could record the cache status into log files.

On Tue, Jul 19, 2011 at 2:24 PM, Ahmet Arslan wrote:
> > I am wondering how I could get the Solr cache running status. I know there is a JMX bean containing that information.
> >
> > I just want to know what tool or method you use to monitor the caches, in order to enhance performance or detect issues.
>
> You might find this interesting:
>
> http://sematext.com/spm/solr-performance-monitoring/index.html
> http://sematext.com/spm/index.html
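Recording cache status to log files can be done with a small JMX client built from the JDK alone. A sketch — the Solr cache MBean names vary by version (something like "solr:type=queryResultCache,..." is an assumption to check with jconsole), so the helper is demonstrated below against the in-process platform MBean server just to show the read:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class CacheStatsLogger {
    // Read one attribute from an MBean; for Solr caches the attributes of
    // interest are typically hit/lookup/eviction counts exposed via <jmx/>.
    // Returns null if the name or attribute cannot be resolved.
    static Object readAttribute(MBeanServerConnection conn,
                                String objectName, String attribute) {
        try {
            return conn.getAttribute(new ObjectName(objectName), attribute);
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        // Demo against the local platform MBean server. Against a remote Solr
        // you would first connect with JMXConnectorFactory.connect(...) and
        // read the cache MBeans (names are version-dependent assumptions).
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        Object heap = readAttribute(conn, "java.lang:type=Memory", "HeapMemoryUsage");
        System.out.println("HeapMemoryUsage = " + heap);
    }
}
```

Wrapping the read in a java.util.Timer task and writing the values through your logging framework gives the periodic log-file record asked for above.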