Solr failing on "y" character in string?
Hi,

I have the following setting in schema.xml:

""

The "text" field type was updated with the "preserveOriginal=1" option in the schema.

I have the following string indexed in the field "kunde":

"Harry Heim KG"

Now when I search for "kunde:harry*" it gives me an empty result.

When I search for "kunde:harry" I get the right result. Also "kunde:harr*" works just fine.

The strange thing is that with every other string (for example "kunde:heim*") I get the right result.

So why not on "harry*", with a "y*" at the end?

Kind regards,
S.
--
View this message in context: http://www.nabble.com/Solr-failing-on-%22y%22-charakter-in-string--tp24783211p24783211.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr failing on "y" character in string?
I believe it's because wildcard queries are not stemmed. During indexing, "harry" probably got stemmed to "harr", so "harry*" doesn't match: there is no "harry" token in that string, only "harr". Why wildcard queries are not analyzed is described in the Lucene FAQ on the Lucene Wiki.

You could also try searching for kunde:Harr*, for example (note the upper-case Harr). I bet it won't result in a hit, for the same reason: at index time you probably lower-case tokens with LowerCaseFilter(Factory), and if you search for Harr*, the lower-casing won't happen, because the query string with the wildcard character isn't analyzed.

Otis
--
Sematext is hiring -- http://sematext.com/about/jobs.html?mls
Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR

----- Original Message -----
> From: gateway0
> To: solr-user@lucene.apache.org
> Sent: Sunday, August 2, 2009 7:30:19 PM
> Subject: Solr failing on "y" character in string?
>
> [...]
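To make the answer concrete, here is a sketch of the behaviour against the standard select URL; the host, port and core path are assumptions for a default single-core setup, and it presumes the lower-casing plus stemming analysis chain described above:

    # Wildcard terms bypass the analyzer, so they must match what is actually
    # indexed: lower-cased and (with a stemming filter) stemmed tokens.
    curl 'http://localhost:8983/solr/select?q=kunde:harry*'  # no hit: only the stemmed token "harr" is in the index
    curl 'http://localhost:8983/solr/select?q=kunde:harr*'   # hit: the prefix matches the stemmed token
    curl 'http://localhost:8983/solr/select?q=kunde:Harr*'   # likely no hit: "Harr" is never lower-cased at query time

In practice this means the client has to lower-case (and, if necessary, shorten) the prefix itself before building a wildcard query.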
Re: How to get a stack trace
Your heap may just be too small, or you may have a memory leak. A stack trace may not help you, since the thread that encountered the OutOfMemoryError may not be where the memory leak is. A heap dump will tell you what's using up all the memory in your heap.

Bill

On Thu, Jul 30, 2009 at 3:54 PM, Nicolae Mihalache wrote:
> Hello,
>
> I'm a new user of Solr but I have worked a bit with Lucene before. I get
> some out of memory exception when optimizing the index through Solr and I
> would like to find out why.
> However, the only message I get on standard output is:
> Jul 30, 2009 9:20:22 PM org.apache.solr.common.SolrException log
> SEVERE: java.lang.OutOfMemoryError: Java heap space
>
> Is there a way to get a stack trace for this exception? I had a look into
> the java.util.logging options and didn't find anything.
>
> My Solr runs in some standard configuration inside Jetty.
> Any suggestion would be appreciated.
>
> Thanks,
> nicolae
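As a concrete how-to (a sketch assuming the example Jetty start script; the heap size and dump path are placeholders to adjust), the standard HotSpot options make the JVM write a heap dump the moment the OutOfMemoryError is thrown, which can then be opened with jhat or the Eclipse Memory Analyzer:

    # Dump the heap automatically on OutOfMemoryError:
    java -Xmx1024m \
         -XX:+HeapDumpOnOutOfMemoryError \
         -XX:HeapDumpPath=/var/log/solr \
         -jar start.jar

    # Or take a dump from an already running JVM (JDK 6+):
    jmap -dump:format=b,file=solr-heap.hprof <solr-pid>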
Re: 99.9% uptime requirement
We have been using Solr in production for years. The only kind of crash that we have observed is a JVM crash.

On Fri, Jul 31, 2009 at 9:48 PM, Robert Petersen wrote:
> Hi all,
>
> My Solr project powers almost all the pages in our site and so needs to
> be up, period. My question is: what can I do to ensure that happens?
> Does Solr ever crash, assuming reasonable load conditions and no extreme
> index sizes?
>
> I saw some comments about running Solr under daemontools in order to get
> an auto-restart on crashes. From what I have seen so far in my limited
> experience, Solr is very stable and never crashes (so far). Does anyone
> else have this requirement, and if so, how do they deal with it? Is
> anyone else running Solr under daemontools in a production site?
>
> Thanks for any input you might have,
> Robi

--
- Noble Paul | Principal Engineer | AOL | http://aol.com
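For anyone trying the daemontools approach asked about above, here is a minimal sketch of a supervise "run" script for the example Jetty layout; the install path and heap size are assumptions to adapt to your installation:

    #!/bin/sh
    # /service/solr/run -- supervise restarts this process whenever the JVM
    # exits, which gives the auto-restart behaviour discussed above.
    exec 2>&1
    cd /opt/solr/example
    exec java -Xmx1024m -jar start.jar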
change sort order for MoreLikeThis
Hi,

I'm looking at changing the result order when searching with MoreLikeThis (MLT). I tried the sort= parameter, but it's not working. I checked the wiki and can't find anything. Is there a way to do this?

Thanks,
/Laurence
Re: Lock timed out with 2 workers running
Hi Chris,

Sorry for the very late reply. As a work-around we set the locking to "single" and turned off one of our workers. To answer your questions, please see below.

2009/7/17 Chris Hostetter
>
> This is really odd.
>
> Just to clarify...
> 1) you are running a normal solr installation (in a servlet
> container) and using SolrJ to send updates to Solr from another
> application, correct?

Yep, we are running an out-of-the-box Solr installation using Tomcat as the servlet container. Both of our index workers use SolrJ to send updates to Solr.

> 2) Do you have any special custom plugins running

Nope, everything is out-of-the-box.

> 3) do you have any other apps that might be attempting to access the index
> directly?

Actually, there is a third app (an instance of the index worker, but with not all functionality enabled). It only sends delete requests to Solr, but that goes via SolrJ as well. And I double-checked that all these workers are hitting the same Solr base URL.

> 4) what OS are you using? ... what type of filesystem? (local disk or some
> shared network drive)

CentOS 5.2, local disk.

> 5) are these errors appearing after Solr crashes and you restart it?

Yep. I can't find the logs, but it's something like "can't obtain lock for ... .lck". I need to delete that file in order to start Solr properly.

> 6) what version of Solr are you using?

The latest 1.3.0 release.

> No matter how many worker threads you have, there should only be one
> IndexWriter using the index/lockfile from Solr ... so this error should
> really never happen in normal usage.

I'm not sure what you mean by normal usage, but aside from the 2 workers (or 3), we are running rsync and snapshooter every 30 seconds, and on the slave we are running snappuller every 30 seconds as well. This is a requirement to pick up the latest changes right away.
Thanks,
/Laurence

> : Jul 10, 2009 4:01:55 AM org.apache.solr.common.SolrException log
> : SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: SimpleFSLock@/projects/msim/indexdata/data/index/lucene-0614ba206dd0e0871ca4eecf8f2e853a-write.lock
> :   at org.apache.lucene.store.Lock.obtain(Lock.java:85)
> :   at org.apache.lucene.index.IndexWriter.init(IndexWriter.java:1140)
> :   at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:938)
> :   at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:116)
> :   at org.apache.solr.update.UpdateHandler.createMainIndexWriter(UpdateHandler.java:122)
> :   at org.apache.solr.update.DirectUpdateHandler2.openWriter(DirectUpdateHandler2.java:167)
> :   at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:221)
> :   at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:59)
> :   at org.apache.solr.handler.XmlUpdateRequestHandler.processUpdate(XmlUpdateRequestHandler.java:196)
> :   at org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:123)
> :   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
> :   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
> :   at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
> :   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
> :   at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
> :   at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
> :   at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:210)
> :   at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
> :   at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
> :   at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
> :   at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
> :   at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:542)
> :   at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:151)
> :   at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:870)
> :   at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
> :   at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
> :   at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
> :   at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
> :   at java.lang.Thread.run(Thread.java:619)
>
> -Hoss
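Since the poster mentions having to delete the lock file after a crash before Solr will start again, here is a minimal sketch of that clean-up, using the index directory from the trace above; the path is specific to that setup, and it should only be run while Solr (and any other IndexWriter on that index) is stopped:

    # Remove the stale Lucene write lock left behind by the crashed JVM,
    # then restart Solr.
    rm /projects/msim/indexdata/data/index/lucene-*-write.lock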