Re: fieldCache problem OOM exception

2011-07-22 Thread Bernd Fehling
The current status of my installation is that with some tweaking of Java I get a runtime of about 2 weeks until OldGen (14GB) is filled to 100 percent and won't free anything even with full GC. The fieldCache's share of a heap dump taken at that time is over 80 percent of the whole heap (20GB). And that

RE: What is the different?

2011-07-22 Thread Pierre GOSSE
Hi, Have you checked the queries using the debugQuery=true parameter? This could give some hints about what is searched in both cases. Pierre -Original message- From: cnyee [mailto:yeec...@gmail.com] Sent: Friday, July 22, 2011 05:14 To: solr-user@lucene.apache.org Subject: What is
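Pierre's suggestion is just a request parameter; on a default Solr install the request would look roughly like the following (host, port, and handler path here are assumptions):

```
http://localhost:8983/solr/select?q=your+query&debugQuery=true
```

The response then carries a debug section with the parsed query and per-document score explanations, which makes it easy to compare the two cases.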

Re: Geospatial queries in Solr

2011-07-22 Thread David Smiley (@MITRE.org)
Jamie, You are using the field named "point" which is based on PointFieldType. Keep in mind that just because this field type is named this way, does *not* mean at all that other fields don't hold points, or that this one is especially suited to it. Arguably this one is named poorly. This fiel

RE: embeded solrj doesn't refresh index

2011-07-22 Thread Marc Sturlese
Are you indexing with full-import? If so, and the resulting index has a similar number of docs to the one you had before, try setting reopenReaders to false in solrconfig.xml. You have to send the commit, of course. -- View this message in context: http://lucene.472066.n3.nabble.com/embeded-solr

RE: Culr Tika not working with blanks into literal.field

2011-07-22 Thread Peralta Gutiérrez del Álamo
Hi. Is it possible to set fields with blank values using extract update? Thanks From: pacopera...@hotmail.com To: solr-user@lucene.apache.org Subject: Culr Tika not working with blanks into literal.field Date: Wed, 20 Jul 2011 12:53:18 + Hi. I'm trying to index bina

RE: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread mdz-munich
mdz-munich wrote: > > Yeah, indeed. > > But since the VM is equipped with plenty of RAM (22GB) and it works so far > (Solr 3.2) very well with this setup, I AM slightly confused, am I? > > Maybe we should LOWER the dedicated Physical Memory? The remaining 10GB > are used for a second tomcat (8G

RE: commit time and lock

2011-07-22 Thread Pierre GOSSE
Solr still responds to search queries during commit; only new indexing requests will have to wait (until the end of the commit?). So I don't think your users will experience increased response time during commits (unless your server is much undersized). Pierre -Original message- From: Jonty

RE: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread mdz-munich
I was wrong. After rebooting tomcat we discovered a new sweetness: /SEVERE: REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@3c753c75 (core.name) has a reference count of 1 22.07.2011 11:52:07 org.apache.solr.common.SolrException log SEVERE: java.lang.RuntimeException: java.io.IOExcep

Re: commit time and lock

2011-07-22 Thread Jonty Rhods
Thanks for the clarity. One more thing I want to know about optimization. Right now I am planning to optimize the server every 24 hours. Optimization also takes time (last time it took around 13 minutes), so I want to know: 1. when optimization is under process, will the solr server respons

convert date format at indexing time

2011-07-22 Thread Peralta Gutiérrez del Álamo
Hi. I'm indexing binary documents such as Word, pdf, ... from a file system. I'm extracting the attribute attr_creation_date for these documents, but the format I'm getting is Wed Jan 14 12:13:00 CET 2004 instead of 2004-01-14T12:12:00Z. Is it possible to convert the date format at indexing time? Thanks Bes
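The "Wed Jan 14 ..." form is exactly what java.util.Date.toString() produces, so it can be re-parsed and re-emitted in Solr's ISO-8601 UTC format before indexing. A minimal client-side sketch (class and method names are my own, not from the thread; note that converting CET to UTC shifts the hour):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class DateConvert {
    /** Re-parse a Date.toString()-style value and format it the way Solr expects. */
    public static String toSolrDate(String raw) throws ParseException {
        // Input pattern matches e.g. "Wed Jan 14 12:13:00 CET 2004"
        SimpleDateFormat in = new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy", Locale.ENGLISH);
        // Solr dates are ISO-8601 and always UTC ("Z")
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.ENGLISH);
        out.setTimeZone(TimeZone.getTimeZone("UTC"));
        Date d = in.parse(raw);
        return out.format(d);
    }

    public static void main(String[] args) throws ParseException {
        // 12:13 CET (UTC+1 in January) becomes 11:13 UTC
        System.out.println(toSolrDate("Wed Jan 14 12:13:00 CET 2004"));
    }
}
```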

RE: commit time and lock

2011-07-22 Thread Pierre GOSSE
Solr will respond to searches during optimization, but commits will have to wait until the end of the optimization process. During optimization a new index is generated on disk by merging every single file of the current index into one big file, so your server will be busy, especially regarding dis

Re: Geospatial queries in Solr

2011-07-22 Thread Jamie Johnson
Ah, my mistake then. I will switch to using the geohash field. When doing my query I did run it against geohash but when I got Russia that was more incorrect than point so I stopped using it. Is there a timeline by which you expect the dateline issue to be addressed? I don't believe that will b

Re: Logically equivalent queries but vastly different no of results?

2011-07-22 Thread cnyee
I think I know what it is. The second query has higher scores than the first. The additional condition "domain_ids:(0^1.3 OR 1)" which evaluates to true always - pushed up the scores and allows a LOT more records to pass. Is there a better way of doing this? Regards, Yee -- View this message in

Re: Logically equivalent queries but vastly different no of results?

2011-07-22 Thread Michael Kuhlmann
Am 22.07.2011 14:27, schrieb cnyee: > I think I know what it is. The second query has higher scores than the first. > > The additional condition "domain_ids:(0^1.3 OR 1)" which evaluates to true > always - pushed up the scores and allows a LOT more records to pass. This can't be, because the scor

Re: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread Yonik Seeley
> IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux amd64-64 I'm confused why MMapDirectory is getting used with the IBM JVM... I had thought it would default to NIOFSDirectory on Linux w/ a non Oracle JVM. Are you specifically selecting MMapDirectory in solrconfig.xml? Can you try the Oracle JVM

Re: commit time and lock

2011-07-22 Thread Marc SCHNEIDER
Hello, Pierre, can you tell us where you read that? "I've read here that optimization is not always a requirement to have an efficient index, due to some low level changes in lucene 3.xx" Marc. On Fri, Jul 22, 2011 at 2:10 PM, Pierre GOSSE wrote: > Solr will response for search during optimizat

Re: Updating fields in an existing document

2011-07-22 Thread Marc SCHNEIDER
Yes, that's it: if you add the same document twice (i.e. with the same id) it will replace it. On Thu, Jul 21, 2011 at 7:46 PM, Benson Margulies wrote: > A followup. The wiki has a whole discussion of the 'update' XML > message. But solrj has nothing like it. Does that really exist? Is > there a reas
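The replace-on-re-add behavior relies on the schema's uniqueKey field; a sketch of the update XML (field names here are illustrative, not taken from the thread):

```xml
<add>
  <doc>
    <field name="id">doc-1</field>
    <field name="title">first version</field>
  </doc>
</add>
<!-- Posting an <add> again with id "doc-1" and a different title
     overwrites the stored document: Solr deletes the old version by
     uniqueKey and indexes the new one. -->
```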

Re: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread Yonik Seeley
On Fri, Jul 22, 2011 at 9:44 AM, Yonik Seeley wrote: >> IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux amd64-64 > > I'm confused why MMapDirectory is getting used with the IBM JVM... I > had thought it would default to NIOFSDirectory on Linux w/ a non > Oracle JVM. I verified that the MMapDirec

Re: Geospatial queries in Solr

2011-07-22 Thread Smiley, David W.
Wrapping the dateline or being able to encircle one of the poles (but not necessarily both) are polygon query features that I feel need to be addressed before this module is first released (whenever that is), definitely. And arguably before benchmarking, which we're looking to focus on soon. S

Re: Geospatial queries in Solr

2011-07-22 Thread Jamie Johnson
Thanks David. I'm going to continue to play with this, as an FYI you were spot on, changing to use a geohash field worked with the previous test. Again I appreciate all of the information, and awesome work. On Fri, Jul 22, 2011 at 10:05 AM, Smiley, David W. wrote: > Wrapping the dateline or be

RE: commit time and lock

2011-07-22 Thread Pierre GOSSE
Hi Marc, I've read that in a thread titled "Weird optimize performance degradation", where Erick Erickson states that "Older versions of Lucene would search faster on an optimized index, but this is no longer necessary.", and more recently in a thread you initiated a month ago, "Question about op

problem searching on non standard characters

2011-07-22 Thread Jason Toy
How does one search for words with characters like # and +. I have tried searching solr with "#test" and "\#test" but all my results always come up with "test" and not "#test". Is this some kind of configuration option I need to set in solr? -- - sent from my mobile 6176064373

Re: commit time and lock

2011-07-22 Thread Shawn Heisey
On 7/22/2011 8:23 AM, Pierre GOSSE wrote: I've read that in a thread title " Weird optimize performance degradation", where Erick Erickson states that "Older versions of Lucene would search faster on an optimized index, but this is no longer necessary.", and more recently in a thread you initia

Re: problem searching on non standard characters

2011-07-22 Thread François Schiettecatte
Check your analyzers to make sure that these characters are not getting stripped out in the tokenization process; the URL for 3.3 is somewhere along the lines of: http://localhost/solr/admin/analysis.jsp?highlight=on And you should indeed be searching on "\#test". François On Jul 2

Re: problem searching on non standard characters

2011-07-22 Thread Shawn Heisey
On 7/22/2011 8:34 AM, Jason Toy wrote: How does one search for words with characters like # and +. I have tried searching solr with "#test" and "\#test" but all my results always come up with "test" and not "#test". Is this some kind of configuration option I need to set in solr? I would gues

Re: problem searching on non standard characters

2011-07-22 Thread François Schiettecatte
Adding to my previous reply, I just did a quick check on the 'text_en' and 'text_en_splitting' field types and they both strip leading '#'. Cheers François On Jul 22, 2011, at 10:49 AM, Shawn Heisey wrote: > On 7/22/2011 8:34 AM, Jason Toy wrote: >> How does one search for words with character
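As the replies above note, escaping only helps when the field's tokenizer preserves the character (e.g. a whitespace-based type); with 'text_en' the leading '#' is stripped at index time regardless. For the query side, a small escaper in the spirit of SolrJ's ClientUtils.escapeQueryChars can be sketched like this (class name is my own):

```java
public class QueryEscape {
    // Backslash-escape characters that have meaning in the Lucene/Solr
    // query syntax so they are matched literally. Remember this only
    // works if the indexed tokens still contain the character.
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;#".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(QueryEscape.escape("#test")); // prints \#test
    }
}
```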

RE: Re: previous and next rows of current record

2011-07-22 Thread Jonathan Rochkind
> Yes exactly same problem i m facing. Is there any way to resolve this issue.. I am not sure what you mean, "any way to resolve this issue." Did you read and understand what I wrote below? I have nothing more to add. What is it you're looking for? The way to provide that kind of next/previou

RE: commit time and lock

2011-07-22 Thread Pierre GOSSE
Merging does not happen often enough to keep deleted documents to a low enough count? Maybe there's a need to have "partial" optimization available in Solr, meaning that segments with too many deleted documents could be copied to a new file without the unnecessary data. That way cleaning deleted da

RE: commit time and lock

2011-07-22 Thread Jonathan Rochkind
How old is 'older'? I'm pretty sure I'm still getting much faster performance on an optimized index in Solr 1.4. This could be due to the nature of my index and queries (which include some medium sized stored fields, and extensive faceting -- faceting on up to a dozen fields in every reques

Re: commit time and lock

2011-07-22 Thread Shawn Heisey
On 7/22/2011 9:32 AM, Pierre GOSSE wrote: Merging does not happen often enough to keep deleted documents to a low enough count ? Maybe there's a need to have "partial" optimization available in solr, meaning that segment with too much deleted document could be copied to a new file without unn

Re: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread mdz-munich
Hi Yonik, thanks for your reply! > Are you specifically selecting MMapDirectory in solrconfig.xml? Nope. We installed Oracle's Runtime from http://java.com/de/download/linux_manual.jsp?locale=de /java.runtime.name = Java(TM) SE Runtime Environment sun.boot.library.path = /usr/java/jdk1.6.0

Re: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread Yonik Seeley
OK, best guess is that you're going over some per-process address space limit. Try seeing what "ulimit -a" says. -Yonik http://www.lucidimagination.com On Fri, Jul 22, 2011 at 12:51 PM, mdz-munich wrote: > Hi Yonik, > > thanks for your reply! > >> Are you specifically selecting MMapDirectory in

Re: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread mdz-munich
It says:

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 257869
max locked memory       (kbytes, -l) 64
max memory size         (kbytes,

Re: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread Yonik Seeley
Yep, there ya go... your OS configuration is limiting you to 27G of virtual address space per process. Consider setting that to unlimited. -Yonik http://www.lucidimagination.com On Fri, Jul 22, 2011 at 1:05 PM, mdz-munich wrote: > It says: > > /core file size          (blocks, -c) 0 > data se
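To make Yonik's suggested change survive a re-login for the Tomcat user, the limit can be raised in /etc/security/limits.conf rather than via an ad-hoc ulimit call in the startup script; the user name below is an assumption:

```
# /etc/security/limits.conf -- lift the virtual address space cap
# ("as" = address space, the value behind ulimit -v)
tomcat  soft  as  unlimited
tomcat  hard  as  unlimited
```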

Re: Solr 3.3: Exception in thread "Lucene Merge Thread #1"

2011-07-22 Thread mdz-munich
Maybe it's important: - The OS (Open Suse 10) is virtualized on VMWare - Network Attached Storage Best regards Sebastian -- View this message in context: http://lucene.472066.n3.nabble.com/Solr-3-3-Exception-in-thread-Lucene-Merge-Thread-1-tp3185248p3191986.html Sent from the Solr - User mail

Rounding errors in solr

2011-07-22 Thread Brian Lamb
Hi all, I've noticed some peculiar scoring issues going on in my application. For example, I have a field that is multivalued and has several records that have the same value. For example, National Society of Animal Lovers Nat. Soc. of Ani. Lov. I have about 300 records with that exact val

Re: Rounding errors in solr

2011-07-22 Thread Yonik Seeley
On Fri, Jul 22, 2011 at 4:11 PM, Brian Lamb wrote: > I've noticed some peculiar scoring issues going on in my application. For > example, I have a field that is multivalued and has several records that > have the same value. For example, > > >  National Society of Animal Lovers >  Nat. Soc. of An

saving timestamps in trunk broken?

2011-07-22 Thread Jason Toy
In Solr 1.3.1 I am able to store timestamps in my docs so that I can query them. In trunk when I try to store a doc with a timestamp I get a server error; is there a different way I should store this data, or is this a bug? Jul 22, 2011 7:20:14 PM org.apache.solr.update.processor.LogUpdateProcessor fi

Re: saving timestamps in trunk broken?

2011-07-22 Thread Chris Hostetter
: In Solr 1.3.1 I am able to store timestamps in my docs so that I query them. : : In trunk when I try to store a doc with a timestamp I get a sever error, is : there a different way I should store this data or is this a bug? are you sure your schema has that field defined as a (Trie)DateField ?

Re: saving timestamps in trunk broken?

2011-07-22 Thread Jason Toy
I haven't modified my schema in the older solr or trunk solr,is it required to modify my schema to support timestamps? On Fri, Jul 22, 2011 at 4:45 PM, Chris Hostetter wrote: > : In Solr 1.3.1 I am able to store timestamps in my docs so that I query > them. > : > : In trunk when I try to store a

Spellcheck compounded words

2011-07-22 Thread O. Klein
How do I get the spellchecker to suggest compounded words? E.g. q=sail booat and the suggestion/collate is "sailboat" and "sail boat" -- View this message in context: http://lucene.472066.n3.nabble.com/Spellcheck-compounded-words-tp3192748p3192748.html Sent from the Solr - User mailing list archive at

Re: saving timestamps in trunk broken?

2011-07-22 Thread Jason Toy
This is the document I am posting: Post 75004824785129473Post2011-05-30T01:05:18ZNew YorkUnited Stateshello world! In my schema.xml file I have these date fields, do I need more? On Fri, Jul 22, 2011 at 5:00 PM, Jason Toy wrote: > I haven't modified my schema in the older solr or tr

Re: saving timestamps in trunk broken?

2011-07-22 Thread Jason Toy
Hi Chris, you were correct, the field was getting set as a double. Thanks for the help. On Fri, Jul 22, 2011 at 7:03 PM, Jason Toy wrote: > This is the document I am posting: > Post > 75004824785129473Post name="at_d">2011-05-30T01:05:18ZNew > YorkUnited States name="data_text">hello world! >

Problem in execution of error call in ajax request if solr server is not running

2011-07-22 Thread Romi
*$.ajax({ url: solrURL+"/solr/db/select/?qt=dismax&wt=json&&start="+start+"&rows="+end+"&q="+query+"&json.wrf=?", async:false, dataType: 'json', success: function(){ getSolrResponse(sort,order,itemPerPage,showPage,query,solrURL);

Error with custom search component which adds filter

2011-07-22 Thread Jamie Johnson
I have a custom search component which does the following in process SolrQueryRequest req = rb.req; SolrParams params = req.getParams(); QueryWrapperFilter qwf = new QueryWrapperFilter(rb.getQuery()); Filter filter = new TestFi