help for preprocessing the query
Hi, Due to a requirement I need to transform user queries before passing them to the standard handler in Solr. Can anyone suggest the best way to do this? I will need a transformation class that provides functions to process the input query 'qIn' and transform it into the resultant query 'qOut', which would then be passed to the Solr handler as if qOut were the original user query. thanks in anticipation, -umar
Re: help for preprocessing the query
Hi Umar, You may be able to preprocess your request parameters in a servlet filter. In the doFilter() method, you do:

    ServletRequest myRequest = new MyServletRequestWrapper( request );
    ...
    chain.doFilter( myRequest, response );

And you have MyServletRequestWrapper extend ServletRequestWrapper, so that you can get/set the q* parameters through its getParameter() method. Hope this helps, Koji
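A minimal sketch of such a wrapper (the decision to rewrite only the 'q' parameter, and the transform() body, are placeholders for your own logic):

    import javax.servlet.ServletRequest;
    import javax.servlet.ServletRequestWrapper;

    public class MyServletRequestWrapper extends ServletRequestWrapper {

        public MyServletRequestWrapper(ServletRequest request) {
            super(request);
        }

        public String getParameter(String name) {
            String value = super.getParameter(name);
            if ("q".equals(name) && value != null) {
                return transform(value);  // qIn -> qOut
            }
            return value;
        }

        // Placeholder for the actual query transformation.
        private String transform(String qIn) {
            return qIn.trim();
        }
    }

Depending on how Solr reads its parameters, you may need to override getParameterValues() and getParameterMap() in the same way so that all views of the request agree.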
MultiLingual Search
Hello folks, My project requires having the same content (mostly) in multiple languages. Do I need to have different files for the same content in different languages? That would mean indexing every file whenever new content appears. Or can we have a mapping of the content to all the other languages? Suppose I have a hundred documents matching a particular search term (say 'Solr') in different languages; how can I get only Chinese results when I enter 'Solr' in Chinese as the search term? Transliteration in this case will be handled from the front end. Can anyone tell me the steps to implement multilingual search in Solr, as I'm very new to Solr? Thanks, Sachit
Re: MultiLingual Search
I would look toward two implementations:

1st: You could have one index for each language. Just record the preferred language in the session and use it to select the index you are searching. Pros: it is easy to add a new language; just put another index instance online. Cons: it can become expensive and difficult to maintain, as you need to replicate changes across all the indexes.

2nd: You could have fields like english_title, chinese_title, and so on; the language is concatenated with the field name, and you request just the fields for the language you know. Pros: it is easy to maintain. One index; just make a new field for each language, restart the Solr server, and you have all you need. Cons: each document becomes larger, so it is slower to search through.

-- Alexander Ramos Jardim
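For the second approach, the schema.xml side might look something like this minimal sketch (the field names are invented for illustration; the analyzer behind each field type is up to you):

    <!-- One analyzed field per language; "text" is the stock example type. -->
    <field name="english_title" type="text" indexed="true" stored="true"/>
    <field name="chinese_title" type="text" indexed="true" stored="true"/>

    <!-- Or a dynamic field, so new languages need no schema edit: -->
    <dynamicField name="*_title" type="text" indexed="true" stored="true"/>

The application then queries english_title or chinese_title depending on the session language; note that the stock "text" analyzer is English-oriented, so a CJK-aware analyzer would be needed for the Chinese field in practice.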
Re: Extending XmlRequestHandler
Just continuing on my quest: how difficult would it be to make a RequestHandler that understands a given SOAP request?

2008/5/9 Alexander Ramos Jardim:
> Thanks,
>> To maybe save you from reinventing the wheel, when I asked a similar question a couple weeks back, hossman pointed me towards SOLR-285 and SOLR-370. 285 does XSLT, 370 does STX.
> But sorry, can you point me to the version? I am not accustomed to version control.

-- Alexander Ramos Jardim
RE: Solr hardware specs
thanks all, that was all valuable information. btw: is there any e-book on Solr? many thanks, ak

> And use a log of real queries, captured from your website or one like it. Query statistics are not uniform. wunder
>
>> This still isn't very helpful. How big are the docs? How many fields do you expect to index? What is your expected query rate? You can get away with an old laptop if your docs are, say, 5K each, you only expect to query once a day, and you have one text field. If each doc is 10M, you're indexing 250 fields, and your expected query rate is 100/sec, you need some serious hardware. But 400K docs isn't very big by SOLR standards in terms of the number of docs. What I'd really recommend is that you just take an existing machine, create an index on it, and measure. Be aware that the first few queries will be much slower than subsequent queries, so throw out the first few queries from your timings. Best, Erick
>
>>> HI Nick, I'm quite new to solr, so excuse my ignorance of any solr-related settings :). We think we would have up to 400K docs in a loaded environment. We certainly don't want Solr to be publicly accessible (just for internal use). We are not sure whether having 2 network interfaces would help with the speed issues we have due to the site location of the company. thanks, ak
>
>>>> Hi, It all depends on the load your server is under, how many documents you have, etc. I am not sure what you mean by network connectivity; Solr really should not be run on a publicly accessible IP address. Can you provide some more info on the setup? -Nick
>
>>>>> Hello, Can someone kindly advise me on the hardware specs (CPU/HDD/RAM) needed to install Solr on a production server? We are planning to have it on Debian. Also, what network connectivity does it require (incoming and outgoing)? Thanks for your time. ak
Re: MultiLingual Search
On Mon, 12 May 2008 16:16:28 +0530 "Sachit P. Menon" <[EMAIL PROTECTED]> wrote:
> My project requires having the same content (mostly) in multiple languages.

Hi Sachit, please search the archives of the list; this topic seems to come up twice a week or thereabouts :) You are of course encouraged to ask again when you have specific questions that haven't already been answered. Everyone - time for a MultiLingual FAQ on the Wiki?

B
{Beto|Norberto|Numard} Meijome
Re: Extending XmlRequestHandler
On May 12, 2008, at 8:31 AM, Alexander Ramos Jardim wrote:
> How difficult would it be to make a RequestHandler that understands a given SOAP request?

I would implement this at the servlet API layer (or rather with some SOAP toolkit, like Axis)... and wire into Solr's API there. Erik
Re: Extending XmlRequestHandler
So, wouldn't you use a SoapRequestHandler? Would you use SolrJ to do the wiring? Or would you put the SOAP on the Solr server side?

2008/5/12 Erik Hatcher:
> I would implement this at the servlet API layer (or rather with some SOAP toolkit, like Axis)... and wire into Solr's API there.

-- Alexander Ramos Jardim
Re: Extending XmlRequestHandler
On May 12, 2008, at 9:18 AM, Alexander Ramos Jardim wrote:
> Wouldn't you use a SoapRequestHandler?

First of all, *I* wouldn't really want to be caught coding up any kind of client or server SOAP call to Solr. It seems mostly ridiculous to me when Solr's response is already malleable into practically any kind of hash/array data structure format you want. But no, I don't think I would build a request handler for SOAPifying Solr. I'd go as simple as possible, use the Axis web stuff (the last time I did this was ages ago, caveat), and talk to Solr's API - which could be a call to a request handler internally.

> Would you use SolrJ to make the wiring?

No. SolrJ is for Java -> Solr communications, and it does just fine without any SOAP in there at all. In fact, the latest incarnations do this with serialized Java objects of some sort, making the data smaller and faster to process than XML.

> Or would you put the SOAP on the solr server side?

Only on the server side if you're a masochist. Otherwise, just use Solr without SOAP and do something else with your free time :) Erik
Re: How Special Character '&' used in indexing
Hi Mike, Thanks for your reply. I have got the answer to the question I posted. I know people are donating their time here; ASAP doesn't mean that I am demanding they reply fast. Please read the lines before you comment on something (*Please kindly* reply ASAP). I'm a newbie and asked out of curiosity. I don't know if it hurt you (I am sorry for that). Thanks, Ricky.

On Fri, May 9, 2008 at 3:30 PM, Mike Klaas <[EMAIL PROTECTED]> wrote:
> On 9-May-08, at 6:26 AM, Ricky wrote:
>> I have tried sending '&amp;' instead of '&', like the following: A &amp; K Inc. But I still get the same error: "entity reference name can not contain character ' A &..."
>
> Please use a library for doing XML encoding--there is absolutely no reason to do this yourself.
>
>> Please kindly reply ASAP.
>
> Please also realize that the people responding here are donating their time and that it is inappropriate to ask for an expedited response.
> -Mike
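As an aside, the library route Mike recommends is only a couple of lines; a minimal sketch using Commons Lang (the field layout here is invented for illustration; any XML-aware library will do):

    import org.apache.commons.lang.StringEscapeUtils;

    public class AddDocBuilder {
        public static void main(String[] args) {
            String name = "A & K Inc.";
            // escapeXml turns & into &amp;, < into &lt;, and so on.
            String xml = "<add><doc><field name=\"name\">"
                       + StringEscapeUtils.escapeXml(name)
                       + "</field></doc></add>";
            System.out.println(xml);
            // prints: <add><doc><field name="name">A &amp; K Inc.</field></doc></add>
        }
    }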
Re: Extending XmlRequestHandler
Erik, Thanks for the comments. But they raised some doubts in my mind. What I need to do is integrate Solr into an environment that communicates via WSDL/SOAP. There will be lots of web services communicating with Solr. Solr will be used like a web service, so I need to make it possible for the other side to send a WSDL-based message to my web service, and for my service to communicate with Solr. The expected message volume is 60,000 messages per hour, with peaks of 100,000 messages per hour, and it will scale up over the coming months. That's why I am trying to figure out the most performant way to do things. I understood what you said about putting SOAP inside Solr. I agree; that's not smart. Now I am thinking about the web service talking to an embedded Solr server. Is that what you were talking about?

-- Alexander Ramos Jardim
Re: Extending XmlRequestHandler
Performance-wise, it would be best for your web services to communicate with Solr using SolrJ. I'm sure it would perform better than SOAP, and you won't need to do anything custom with Solr. If you're using Solr 1.3, you can get a *huge* performance boost by using the BinaryResponseParser in SolrJ.

-- Regards, Shalin Shekhar Mangar.
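A rough SolrJ sketch of what that looks like (SolrJ class names were still settling on the 1.3 trunk, so treat them as approximate; the URL and query string are placeholders):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.BinaryResponseParser;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class SearchClient {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            // Binary response format: smaller and faster to parse than XML.
            server.setParser(new BinaryResponseParser());

            QueryResponse rsp = server.query(new SolrQuery("solr"));
            System.out.println("hits: " + rsp.getResults().getNumFound());
        }
    }

Your SOAP endpoint would then sit in front of code like this, translating between the WSDL messages and SolrJ calls.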
Re: Extending XmlRequestHandler
On May 12, 2008, at 9:52 AM, Alexander Ramos Jardim wrote:
> I understood what you said about putting SOAP inside Solr. I agree; that's not smart. Now I am thinking about the web service talking to an embedded Solr server. Is that what you were talking about?

Quite pleasantly, you don't even really have to code that level of detail in any hardcoded way. You can use SolrJ behind a SOAP interface and use it with a SolrServer. The implementation of that can switch between "embedded" (which I'm not even really sure what that means exactly) or via HTTP the good ol' fashioned way. Erik
Re: Extending XmlRequestHandler
Nice. I will try that with Solr 1.3, as Shalin suggests.

-- Alexander Ramos Jardim
Re: Missing content Stream
Hi Hoss,

> 1) Posting the exact same question twice because you didn't get a reply in the first 8 hours isn't going to encourage people to reply faster. Best case scenario: you waste time people could be spending reading another email; worst case scenario: you irk people and put them in a bad mood so they don't feel like being helpful.

Sorry, I did not mean to spam or irk people. I will follow your advice from now on, as it sounds reasonable.

> 2) In both instances of your question, the curl command you listed didn't include a space between the URL and the --data-binary option; perhaps that's your problem?

There is a space between the URL and the --data-binary option.

> 3) What *exactly* do you see in your shell when you run the command? You said the lines from books.csv get dumped to your console, but what appears above it? What appears below it? books.csv is only 10 lines; just paste it all, from the line where you run the command to the next prompt you see.

Here is what I get when I run the curl command:

    curl http://localhost:8983/solr/update/csv --data-binary @books.csv -H 'Content-type:text/plain; charset=utf-8'
    Error 400
    HTTP ERROR: 400 missing content stream
    RequestURI=/solr/update/csv
    Powered by Jetty://

Here is what I get on the console:

    May 12, 2008 9:57:20 AM org.apache.solr.core.SolrException log
    SEVERE: org.apache.solr.core.SolrException: missing content stream
        at org.apache.solr.handler.CSVRequestHandler.handleRequestBody(CSVRequestHandler.java:50)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:77)
        at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:243)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:658)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:191)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:159)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
        at org.mortbay.jetty.Server.handle(Server.java:285)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
        at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:835)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:641)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
        at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
        at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)
    May 12, 2008 9:57:20 AM org.apache.solr.core.SolrCore execute
    INFO: /update/csv id,cat,name,price,inStock,author_t,series_t,sequence_i,genre_s
    0553573403,book,A Game of Thrones,7.99,true,George R.R. Martin,"A Song of Ice and Fire",1,fantasy
    0553579908,book,A Clash of Kings,7.99,true,George R.R. Martin,"A Song of Ice and Fire",2,fantasy
    055357342X,book,A Storm of Swords,7.99,true,George R.R. Martin,"A Song of Ice and Fire",3,fantasy
    0553293354,book,Foundation,7.99,true,Isaac Asimov,Foundation Novels,1,scifi
    0812521390,book,The Black Company,6.99,false,Glen Cook,The Chronicles of The Black Company,1,fantasy
    0812550706,book,Ender's Game,6.99,true,Orson Scott Card,Ender,1,scifi
    0441385532,book,Jhereg,7.95,false,Steven Brust,Vlad Taltos,1,fantasy
    0380014300,book,Nine Princes In Amber,6.99,true,Roger Zelazny,the Chronicles of Amber,1,fantasy
    0805080481,book,The Book of Three,5.99,true,Lloyd Alexander,The Chronicles of Prydain,1,fantasy
    080508049X,book,The Black Cauldron,5.99,true,Lloyd Alexander,The Chronicles of Prydain,2,fantasy
    = 0 15

Your reply will be highly appreciated. Thanks, Ricky.
Re: Multicore and SolrResourceLoader
On May 10, 2008, at 1:03 PM, Chris Hostetter wrote:
> : I've been digging around in multicore and I am curious how to force a reload of the sharedLib classloader. I can reload a given core, which instantiates a new SolrResourceLoader for that core, but I want to be able to reload the classloader for the sharedLib.
>
> that seems really dangerous to me ... you could wind up changing class impls out from under a core ... which could give you serious incompatibilities. The only safe way i can imagine doing this would be if we add a way to completely reinitialize the MultiCore (which would reload all the SolrCores)

That makes sense. I'm pretty sure that in my scenario I would want all cores to have access to the new library, so I guess it does make sense to re-instantiate the MultiCore.
Re: help for preprocessing the query
On Mon, May 12, 2008 at 2:50 PM, Koji Sekiguchi <[EMAIL PROTECTED]> wrote:
> You may be able to preprocess your request parameters in a servlet filter. In the doFilter() method, you do:
> ServletRequest myRequest = new MyServletRequestWrapper( request );

Thanks for your response. Where is the ServletRequest class? I am using the Solr 1.3 trunk code and found SolrServlet, but it is deprecated; which class can I use instead in the 1.3 codebase? I also tried overriding the standard request handler; how do I rewrite the query params there? Can you point me to some documentation?
Re: help for preprocessing the query
ServletRequest and ServletRequestWrapper are part of the Java servlet API (not Solr). Basically, Koji is hinting at writing a ServletFilter implementation (again using the servlet API) and creating a wrapper ServletRequest which modifies the underlying request params, which can then be used by Solr.

-- Regards, Shalin Shekhar Mangar.
Re: help for preprocessing the query
On Mon, May 12, 2008 at 8:42 PM, Shalin Shekhar Mangar wrote:
> ServletRequest and ServletRequestWrapper are part of the Java servlet API (not Solr). Basically, Koji is hinting at writing a ServletFilter implementation and creating a wrapper ServletRequest which modifies the underlying request params, which can then be used by Solr.

Sorry for the silly question; I am new to servlets. If my understanding is right, I will need to create a servlet/wrapper that listens for the user-facing queries, passes the processed text on to the Solr request handler, and I need to pack this servlet class file into the Solr war file. But how would I ensure that my servlet is called instead of the Solr request handler?
Re: help for preprocessing the query
Shalin Shekhar Mangar wrote:
> ServletRequest and ServletRequestWrapper are part of the Java servlet API (not Solr). Basically, Koji is hinting at writing a ServletFilter implementation (again using the servlet API) and creating a wrapper ServletRequest which modifies the underlying request params, which can then be used by Solr.

Right. Koji
Re: help for preprocessing the query
I haven't written one, but I _think_ you could just implement a QParser that does the transformation. See the LuceneQParser or the DismaxQParser.

On May 12, 2008, at 4:59 AM, Umar Shah wrote:
> Due to a requirement I need to transform user queries before passing them to the standard handler in Solr [...]
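A very rough sketch of that idea against the 1.3 trunk plugin API as I understand it (the parser name, class name, and transform() body are placeholders; check the signatures against your checkout, since these APIs were in flux):

    import org.apache.lucene.queryParser.ParseException;
    import org.apache.lucene.search.Query;
    import org.apache.solr.common.params.SolrParams;
    import org.apache.solr.common.util.NamedList;
    import org.apache.solr.request.SolrQueryRequest;
    import org.apache.solr.search.QParser;
    import org.apache.solr.search.QParserPlugin;

    // Registered in solrconfig.xml with something like:
    //   <queryParser name="rewriting" class="com.example.RewritingQParserPlugin"/>
    public class RewritingQParserPlugin extends QParserPlugin {
        public void init(NamedList args) {}

        public QParser createParser(String qstr, SolrParams localParams,
                                    SolrParams params, SolrQueryRequest req) {
            return new QParser(qstr, localParams, params, req) {
                public Query parse() throws ParseException {
                    String qOut = transform(getString());  // qIn -> qOut
                    // Delegate the rewritten string to the stock Lucene parser.
                    return getParser(qOut, "lucene", getReq()).getQuery();
                }
            };
        }

        // Placeholder for the actual transformation logic.
        static String transform(String qIn) {
            return qIn;
        }
    }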
result limit / diversity with an OR query
Hi, I have a query similar to: x OR y OR z, and I want to know if there is a way to make sure I get one result with x, one result with y, and one with z. Alternatively, is it possible to achieve this through facets? Thanks, S.
Re: result limit / diversity with an OR query
The easy answer is: x AND y AND z. This will return ALL the documents containing x, y, and z. But if you also want the documents containing AT LEAST one of the three, try this: (x AND y AND z)^10 OR (x OR y OR z) (the idea is to boost the AND query). This way, the documents in which x, y, and z all occur will come first. Pako

s d wrote:
> Hi, I have a query similar to: x OR y OR z, and I want to know if there is a way to make sure I get one result with x, one result with y, and one with z.
Re: help for preprocessing the query
You'll *not* write a servlet. You'll implement the Filter interface: http://java.sun.com/j2ee/sdk_1.3/techdocs/api/javax/servlet/Filter.html

In the doFilter method, you'll create a ServletRequestWrapper which changes the incoming param. Then you'll call chain.doFilter with the new request object. You'll need to add this filter before the SolrRequestFilter in Solr's web.xml. Look at http://www.onjava.com/pub/a/onjava/2001/05/10/servlet_filters.html for more details.

-- Regards, Shalin Shekhar Mangar.
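A bare-bones sketch of such a filter, reusing the MyServletRequestWrapper idea from earlier in the thread (the class names are placeholders):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    // Map this in web.xml *before* Solr's dispatch filter so it runs first.
    public class QueryRewriteFilter implements Filter {
        public void init(FilterConfig config) throws ServletException {}

        public void doFilter(ServletRequest request, ServletResponse response,
                             FilterChain chain)
                throws IOException, ServletException {
            // Hand Solr a wrapped request whose getParameter("q") is rewritten.
            chain.doFilter(new MyServletRequestWrapper(request), response);
        }

        public void destroy() {}
    }

Filter ordering follows the order of the <filter-mapping> elements in web.xml, which is why this mapping has to appear before Solr's.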
Re: exceeded limit of maxWarmingSearchers
Thanks for the advice. Unfortunately, my plan was to have two instances both running as "masters", although one would only be a warm standby for querying purposes. I just wanted a little bit of redundancy for the moment, and I thought a true master-slave setup would be overkill. Is it really problematic to run queries on instances that aren't auto-warmed? Sounds like I'm stuck between a rock and a hard place. Am I going to have to build my initial index with one configuration and then restart with a different configuration? I'd prefer to avoid that. If I can get the auto-warming issue straightened out, I *should* be OK with a fairly conservative commit strategy (either auto-commit every fairly large number of docs, or do the same programmatically). Does this sound right?

On 5/10/08, Chris Hostetter <[EMAIL PROTECTED]> wrote:
> : On a solr instance where I am in the process of indexing a moderately large number of documents (300K+), there is no querying of the index taking place at all. I don't understand what operations are causing new searchers to warm, or how to stop them from doing so. I'd be happy to provide more details of my configuration if necessary; I've made very few changes to the solrconfig.xml that comes with the sample application.
>
> The one aspect that I didn't see mentioned in this thread so far is cache autowarming. Even if no querying is going on while you are doing all the indexing, if some querying took place at any point and your caches have some entries in them, every commit will cause autowarming of the caches (according to the autowarm settings on each cache), which results in queries getting executed on your "warming" searcher, and those queries keep cascading on to the subsequent warming searchers.
>
> This is one of the reasons why it's generally a good idea for the caches on your "master" boxes to all have autowarm counts of "0". You can still use the caches in case you do inadvertently hit your master (you don't want it to fall over and die), but you don't want to waste a lot of time warming them on every commit until the end of time.
>
> -Hoss
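For reference, these are the knobs Hoss is talking about in solrconfig.xml; on an indexing-heavy master you would zero the autowarm counts (the sizes here are just the stock example values):

    <filterCache      class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>

With autowarmCount="0" the caches still work for any queries that do arrive, but commits no longer replay old cache keys against every new searcher.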
Re: Unlimited number of return documents?
Hi all, one possible use case could be synchronizing the index against a given data source. E.g., assume that you have a filesystem that is indexed periodically. If files are deleted on this filesystem, they will not be deleted from the index. This way, you can get (e.g.) the complete content from your index in order to check for consistency. Btw: I also played around with the rows parameter in order to get the overall index, but I got exceptions ("not sufficient heap space") when setting rows above some fairly high threshold. Regards, marc

Erik Hatcher schrieb:
> Or make two requests... one with rows=0 to see how many documents match without retrieving any, then another with that amount specified. Erik
>
> On May 9, 2008, at 8:54 AM, Francisco Sanmartin wrote:
>> Yeah, I understand the possible problems of changing this value. It's just a very particular case and there won't be a lot of documents to return. I guess I'll have to use a very high int number; I just wanted to know if there was a "proper" configuration for this situation. Thanks for the answer! Pako
>>
>> Otis Gospodnetic wrote:
>>> Will something a la rows=<big number> work? ;) But are you sure you want to do that? It could be sloow. Otis -- Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>>>
>>>> From: Francisco Sanmartin. Subject: Unlimited number of return documents? What is the value to set for "rows" in solrconfig.xml in order not to have any limitation on the number of returned documents? I've tried "-1" and "0" but no luck... the default is <int name="rows">10</int>. I want Solr to return all available documents by default. Thanks! Pako
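Erik's two-request idea, sketched with SolrJ (the URL and query are placeholders; note that fetching a huge result set in one go is exactly where the heap-space errors come from, so page through with setStart() if the count is large):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class FetchAll {
        public static void main(String[] args) throws Exception {
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");

            // Request 1: rows=0 just to learn how many documents match.
            SolrQuery q = new SolrQuery("*:*");
            q.setRows(0);
            long total = server.query(q).getResults().getNumFound();

            // Request 2: fetch that many (better: loop in fixed-size pages).
            q.setRows((int) total);
            QueryResponse rsp = server.query(q);
            System.out.println("fetched " + rsp.getResults().size()
                             + " of " + total);
        }
    }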
Re: Loading performance slowdown at ~ 400K documents
Glad to hear it. Incidentally, lowering maxBufferedDocs will reduce peak memory consumption during indexing, at the cost of slower indexing throughput. -Mike

On 11-May-08, at 3:41 AM, Tracy Flynn wrote:
> Thanks for the replies. For a completely different reason, I happened to look at the memory stats for all processes, including the Solr instances, and noticed that the slow Solr instance was maxing out, using more virtual memory than allocated. After boosting the maximum heap space and restarting, everything ran at 4x-5x the speed before the fix - and at the rate I reasonably thought it should. Tracy
>
> On May 9, 2008, at 8:02 AM, Tracy Flynn wrote:
>> Hi, I'm starting to see a significant slowdown in loading performance after I have loaded about 400K documents: I go from a load rate of nearly 40 docs/sec to 20-25 docs/sec. Am I correct in assuming that, during indexing operations, Lucene/Solr tries to hold as much of the indexes in memory as possible? If so, does the slowdown indicate a need to increase JVM heap space? Any ideas / help would be appreciated. Regards, Tracy
>>
>> Details: Documents are loaded as XML via POST in batches of 1000, with a commit after each batch. Total current documents ~450,000. Avg document size: 4KB; one indexed text field contains 3KB or so (the body field, standard type 'text'). Dual Xeon 3 GHz, 4 GB memory. SOLR JVM startup options: java -Xms256m -Xmx1000m -jar start.jar
>> [The schema excerpt that followed was mangled in transit: roughly thirty field definitions, mostly stored/indexed fields with integer defaults, plus the compressed body field.]
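The setting Mike mentions lives in the <indexDefaults> section of solrconfig.xml; something like the following (the numbers are the stock example values, not recommendations):

    <indexDefaults>
      <!-- Flush to disk after this many buffered docs; lower = less RAM. -->
      <maxBufferedDocs>1000</maxBufferedDocs>
      <mergeFactor>10</mergeFactor>
    </indexDefaults>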
Re: result limit / diversity with an OR query
On 12-May-08, at 9:31 AM, s d wrote:
> Hi, I have a query similar to: x OR y OR z, and I want to know if there is a way to make sure I get one result with x, one result with y, and one with z.

The easiest way is to execute three separate queries:

    +x y z
    x +y z
    x y +z

-Mike
Re: Unlimited number of return documents?
Hi Marc, Think about how one would go about implementing a manual database table-to-table synchronization. It is not a good idea to iterate over all rows in the target database and check for existence in the source database to find rows which were deleted from the source table. The best way to implement this is through a transaction log (which is essentially how MySQL replication works). A similar approach is used in Solr's DataImportHandler (http://wiki.apache.org/solr/DataImportHandler) to sync databases to Solr: you maintain a table to track deletes, and that table can be used to delete documents from Solr.

-- Regards, Shalin Shekhar Mangar.
single character terms in index - why?
I'm experienced with Lucene, less so with SOLR. I am looking at two systems built on top of SOLR for a library discovery service: blacklight and vufind. I checked the raw Lucene index using Luke and noticed that both of these indexes have single-character terms, such as "d" or "f". I asked about this on the vufind list and was told I didn't understand SOLR and why it would need these. So I'm now asking: why would SOLR want single-character terms? "a" is usually a stopword. I know the library MARC data from which the index is derived has a lot of these characters, because they denote subfields in the data. But why would we want them to be searchable? Naomi Dushay [EMAIL PROTECTED]
Re: Unlimited number of return documents?
Just to know, what were the thresholds at which you got the exception? (I want to know the order of magnitude. I know it depends on the machine and the config; it's just to have an approximate idea.) Thanks, Pako

Marc Bechler wrote:
> Btw: I also played around with the rows parameter in order to get the overall index, but I got exceptions ("not sufficient heap space") when setting rows above some fairly high threshold.
Re: single character terms in index - why?
On Mon, May 12, 2008 at 4:13 PM, Naomi Dushay <[EMAIL PROTECTED]> wrote:
> So I'm now asking: why would SOLR want single character terms?

Solr, like Lucene, can be configured however you want. The example schema is just that - an example. But there are many field types that might be interested in keeping single-letter terms. One can even think of examples where single-letter terms would be useful for normal full-text fields, depending on the domain or on the analysis. One simple example: "d-day" might alternately be indexed as "d" "day", so it would be found with a query of "d day". -Yonik
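The "d-day" behaviour comes from the word-delimiter filter in the example schema's text type; roughly this configuration (trimmed to the relevant filters):

    <fieldType name="text" class="solr.TextField">
      <analyzer>
        <tokenizer class="solr.WhitespaceTokenizerFactory"/>
        <!-- Splits on intra-word punctuation: "d-day" -> "d", "day" -->
        <filter class="solr.WordDelimiterFilterFactory"
                generateWordParts="1" generateNumberParts="1"
                catenateWords="1" catenateNumbers="1" catenateAll="0"/>
        <filter class="solr.LowerCaseFilterFactory"/>
      </analyzer>
    </fieldType>

If single-letter terms are genuinely noise for a given field, that field's analyzer (a stopword list or a length filter) can drop them; it is a per-field decision, not a Solr-wide one.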
JMX monitoring
Hi, I'm new to Solr and I've been attempting to get JMX monitoring working. I can get simple information by using the -Dcom.sun.management.jmxremote command-line switch, but I'd like to get more useful statistics. I've been working on applying the SOLR-256 JMX patch, but the original revisions are pretty old and I'm having to spend a lot of time wandering through the source. Is there a better solution for getting this working, or a newer version of the patch? Thank you, Marshall
indexing pdf documents
Hello, Before writing a little program to extract the text from my PDFs and feed it into Solr as XML, I just wanted to check: does Solr have any capability to digest PDF files, apart from XML? Best Regards, -C.B.
Re: indexing pdf documents
Solr does not have this support built in, but there's a patch for it: https://issues.apache.org/jira/browse/SOLR-284
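If you do roll your own in the meantime, a sketch of the PDFBox-plus-SolrJ route (the PDFBox package names here are from the 0.7.x era and may differ in your version; the field names assume the example schema):

    import java.io.File;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;
    import org.pdfbox.pdmodel.PDDocument;
    import org.pdfbox.util.PDFTextStripper;

    public class PdfIndexer {
        public static void main(String[] args) throws Exception {
            // Extract plain text from the PDF.
            PDDocument pdf = PDDocument.load(new File(args[0]));
            String text;
            try {
                text = new PDFTextStripper().getText(pdf);
            } finally {
                pdf.close();
            }

            // Feed it to Solr as an ordinary document.
            CommonsHttpSolrServer server =
                new CommonsHttpSolrServer("http://localhost:8983/solr");
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", args[0]);
            doc.addField("text", text);
            server.add(doc);
            server.commit();
        }
    }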
Re: AND vs. OR query performance
In general, AND will perform better than OR (because of skipping in the scorers). But if the number of documents matching the AND is close to that matching the OR query, then skipping doesn't gain you much and probably has a little more overhead. -Yonik On Sun, May 11, 2008 at 4:04 AM, Lars Kotthoff <[EMAIL PROTECTED]> wrote: > Dear list, > > during some performance experiments I have found that queries with ORed > search > terms are significantly faster than queries with ANDed search terms, > everything > else being equal. > > Does anybody know whether this is the generally expected behaviour? > > Thanks, > > Lars >
Re: AND vs. OR query performance
Thanks for the clarification. The behaviour I'm seeing is that OR queries are almost *twice* as performant as AND queries, so that's probably down to my specific setup/data. I'll try to investigate further. Lars

On Mon, 12 May 2008 19:35:00 -0400 "Yonik Seeley" <[EMAIL PROTECTED]> wrote:
> In general, AND will perform better than OR (because of skipping in the scorers). But if the number of documents matching the AND is close to that matching the OR query, then skipping doesn't gain you much and probably has a little more overhead.
Selecting data with an order on string field causes slow commits from then on
We have a table that has roughly 1M rows. If we run a query against the table and order by a string field that has a large number of unique values, then subsequent commits of any other document take much longer. If we don't run the query, or if we order on a string field with very few unique values (or don't order at all), the commits are unaffected. Our question is: why does running a query ordered on a string field with a large number of unique values affect all subsequent commits? The only way we've found to fix the problem, once it has started, is to restart Solr.

Our test begins by executing a SOLR add (the XML was mangled in transit: a single User document with id User:13). ==> This takes approx 0.3 sec.

Then we do a SOLR select:

    wt=ruby&q=%28solr_categories_s%3Arestaurant%29%20AND%20type_t%3ARatable%3Bsolr_title_s%20asc&start=0&fl=pk_i%2Cscore&qt=standard

==> This takes approx 2 sec. Then we execute the SAME SOLR command above, and ==> this takes approx 3 sec.

CONFIG INFO: 512MB heap, JVM 1.5, lucene-core-2007-05-20_00-04-53.jar, Solr 1.2

-Chris & David
Re: Selecting data with an order on string field causes slow commits from then on
This was answered yesterday on the list: http://www.nabble.com/Re%3A-exceeded-limit-of-maxWarmingSearchers-p17165631.html

regards, -Mike

On 12-May-08, at 6:12 PM, David Stevenson wrote:
> Our question is: why does running a query ordered on a string field with a large number of unique values affect all subsequent commits?
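The short version of that answer, as I read it: the sorted query lands in the query result cache, and with autowarming enabled every commit replays it against the new searcher; rebuilding the sort's FieldCache over a million documents is what you are timing, and a commit blocks until that warming finishes. Two levers, per that thread: zero the autowarmCount values on the caches (see the exceeded-maxWarmingSearchers thread above), and/or issue commits that don't wait on the warmed searcher, e.g. as the body of a POST to /solr/update:

    <commit waitFlush="false" waitSearcher="false"/>

This returns as soon as the commit itself completes, at the price of the next few queries possibly hitting a cold searcher.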
Field Grouping
Hello. I was wondering if there is a way to get Solr to return documents with the same value for a particular field together. For example, I might want all the documents with exactly the same name field returned next to each other. Is this possible? Thanks!
Re: Field Grouping
On Mon, May 12, 2008 at 9:58 PM, oleg_gnatovskiy <[EMAIL PROTECTED]> wrote:
> I was wondering if there is a way to get Solr to return documents with the same value for a particular field together.

Sort by that field. Since you can only sort on fields with at most a single term per document (this rules out full-text fields), you might want to do a copyField of the "name" field to something like a "name_s" field of type string (which can be sorted on). -Yonik
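In schema.xml terms, that looks like this (assuming a tokenized "name" field already exists):

    <!-- Untokenized copy of "name", usable for sorting. -->
    <field name="name_s" type="string" indexed="true" stored="false"/>
    <copyField source="name" dest="name_s"/>

The example schema's string type also declares sortMissingLast="true", which controls where documents with no value for the field land in the sort order.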
Re: Field Grouping
But I don't want the search results to be ranked based on that field; I only want all the documents with the same value grouped together. The way my system is set up, most documents will have that field empty. Thus, if I sort by it, the documents that have a value will bubble to the top...

Yonik Seeley wrote:
> Sort by that field. Since you can only sort on fields with at most a single term per document (this rules out full-text fields), you might want to do a copyField of the "name" field to something like a "name_s" field of type string (which can be sorted on).
Re: JMX monitoring
Hi Marshall, I've uploaded a new patch which works off the current trunk. Let me know if you run into any problems with this.

-- Regards, Shalin Shekhar Mangar.
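Once MBeans are exposed (via the patch, or just the -Dcom.sun.management.jmxremote.port=... switches), jconsole can browse them, or you can pull them programmatically. A generic JMX client sketch (the port, and the assumption that the patch registers its beans under a "solr" domain, are mine; adjust both to your setup):

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class SolrJmxDump {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
            JMXConnector jmxc = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                // List whatever the patch registered under its domain.
                Set names = conn.queryNames(new ObjectName("solr:*"), null);
                for (Object name : names) {
                    System.out.println(name);
                }
            } finally {
                jmxc.close();
            }
        }
    }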