The current status of my installation is that with some tweaking of
the JVM I get a runtime of about two weeks until OldGen (14GB) is filled
to 100 percent and won't free anything, even with a full GC.
The fieldCache's share of a heap dump taken at that point is over 80 percent
of the whole heap (20GB). And that
Hi,
Have you checked the queries using the debugQuery=true parameter? It could
give some hints about what is actually searched in both cases.
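For example, something like this (core path and query are placeholders):
http://localhost:8983/solr/select?q=your+query&debugQuery=true
The debug section of the response shows how the query was parsed and how each matching document was scored.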
Pierre
-Original Message-
From: cnyee [mailto:yeec...@gmail.com]
Sent: Friday, July 22, 2011 05:14
To: solr-user@lucene.apache.org
Subject: What is
Jamie,
You are using the field named "point", which is based on PointFieldType.
Keep in mind that just because this field type is named this way does *not*
mean at all that other fields can't hold points, or that this one is
especially suited to them. Arguably this one is named poorly. This fiel
Are you indexing with a full import? If so, and the resulting index has a
similar number of docs to the one you had before, try setting reopenReaders
to false in solrconfig.xml.
* You still have to send the commit, of course.
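A minimal sketch of the relevant piece of solrconfig.xml (the surrounding <mainIndex> settings are assumed):
<mainIndex>
  <!-- open a fresh IndexReader on commit instead of reopening the current one -->
  <reopenReaders>false</reopenReaders>
</mainIndex>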
Hi.
Is it possible to set fields with blank values using the extract update handler?
Thanks
From: pacopera...@hotmail.com
To: solr-user@lucene.apache.org
Subject: Curl Tika not working with blanks in literal.field
Date: Wed, 20 Jul 2011 12:53:18 +
Hi.
I'm trying to index bina
mdz-munich wrote:
>
> Yeah, indeed.
>
> But since the VM is equipped with plenty of RAM (22GB) and it has worked very
> well with this setup so far (Solr 3.2), I am slightly confused.
>
> Maybe we should lower the dedicated physical memory? The remaining 10GB
> are used for a second tomcat (8G
Solr still responds to search queries during a commit; only new indexing
requests will have to wait (until the end of the commit?). So I don't think your users
will experience increased response times during commits (unless your server is
badly undersized).
Pierre
-Original Message-
From: Jonty
I was wrong.
After rebooting tomcat we discovered a new sweetness:
SEVERE: REFCOUNT ERROR: unreferenced org.apache.solr.core.SolrCore@3c753c75
(core.name) has a reference count of 1
22.07.2011 11:52:07 org.apache.solr.common.SolrException log
SEVERE: java.lang.RuntimeException: java.io.IOExcep
Thanks for the clarity.
One more thing I want to know about optimization.
Right now I am planning to optimize the server every 24 hours. Optimization
also takes time (last time it took around 13 minutes), so I want to know:
1. while optimization is in progress, will the Solr server respond to searches?
Hi. I'm indexing binary documents such as Word, PDF, ... from a file system.
I'm extracting the attr_creation_date attribute for these documents, but the
format I'm getting is "Wed Jan 14 12:13:00 CET 2004" instead of "2004-01-14T12:12:00Z".
Is it possible to convert the date format at indexing time?
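If converting on the client before posting is an option, plain Java would do it (a sketch; the class name and literal value are illustrative):
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TikaDateToSolr {
    public static void main(String[] args) throws Exception {
        // parse the Date.toString()-style value Tika hands back
        SimpleDateFormat in =
            new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy", Locale.ENGLISH);
        Date d = in.parse("Wed Jan 14 12:13:00 CET 2004");
        // render it in the ISO-8601/UTC form Solr date fields expect
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        out.setTimeZone(TimeZone.getTimeZone("UTC"));
        System.out.println(out.format(d)); // 2004-01-14T11:13:00Z (CET = UTC+1)
    }
}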
Thanks Bes
Solr will respond to searches during optimization, but commits will have to
wait until the end of the optimization process.
During optimization a new index is generated on disk by merging every single
file of the current index into one big file, so your server will be busy,
especially regarding disk I/O.
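For reference, an optimize can be triggered with a plain update message; assuming a default install, something like:
curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' -d '<optimize/>'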
Ah, my mistake then. I will switch to using the geohash field. When
doing my query I did run it against geohash but when I got Russia that
was more incorrect than point so I stopped using it.
Is there a timeline by which you expect the dateline issue to be
addressed? I don't believe that will b
I think I know what it is. The second query has higher scores than the first.
The additional condition "domain_ids:(0^1.3 OR 1)", which always evaluates to
true, pushes up the scores and allows a LOT more records to pass.
Is there a better way of doing this?
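If the always-true clause is only meant to boost domain 0 (not to match more documents), maybe a boost query could express it without touching the match set, e.g. (assuming dismax is in use):
bq=domain_ids:0^1.3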
Regards,
Yee
On 22.07.2011 14:27, cnyee wrote:
> I think I know what it is. The second query has higher scores than the first.
>
> The additional condition "domain_ids:(0^1.3 OR 1)", which always evaluates to
> true, pushes up the scores and allows a LOT more records to pass.
This can't be, because the scor
> IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux amd64-64
I'm confused why MMapDirectory is getting used with the IBM JVM... I
had thought it would default to NIOFSDirectory on Linux w/ a non
Oracle JVM.
Are you specifically selecting MMapDirectory in solrconfig.xml?
Can you try the Oracle JVM
Hello,
Pierre, can you tell us where you read that?
"I've read here that optimization is not always a requirement to have an
efficient index, due to some low level changes in lucene 3.xx"
Marc.
On Fri, Jul 22, 2011 at 2:10 PM, Pierre GOSSE wrote:
> Solr will respond to searches during optimization
Yes, that's it: if you add the same document twice (i.e. with the same id), it
will replace the first one.
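A minimal SolrJ sketch of that behavior (URL and field names are placeholders):
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class ReAdd {
    public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "42");                 // the uniqueKey field
        doc.addField("title", "first version");
        server.add(doc);
        doc.setField("title", "second version");  // same id, so this replaces the doc above
        server.add(doc);
        server.commit();                          // only "second version" is visible afterwards
    }
}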
On Thu, Jul 21, 2011 at 7:46 PM, Benson Margulies wrote:
> A followup. The wiki has a whole discussion of the 'update' XML
> message. But solrj has nothing like it. Does that really exist? Is
> there a reas
On Fri, Jul 22, 2011 at 9:44 AM, Yonik Seeley
wrote:
>> IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux amd64-64
>
> I'm confused why MMapDirectory is getting used with the IBM JVM... I
> had thought it would default to NIOFSDirectory on Linux w/ a non
> Oracle JVM.
I verified that the MMapDirec
Wrapping the dateline or being able to encircle one of the poles (but not
necessarily both) are polygon query features that I feel need to be addressed
before this module is first released (whenever that is), definitely. And
arguably before benchmarking, which we're looking to focus on soon. S
Thanks David. I'm going to continue to play with this, as an FYI you
were spot on, changing to use a geohash field worked with the previous
test. Again I appreciate all of the information, and awesome work.
On Fri, Jul 22, 2011 at 10:05 AM, Smiley, David W. wrote:
> Wrapping the dateline or be
Hi Mark
I've read that in a thread titled "Weird optimize performance degradation",
where Erick Erickson states that "Older versions of Lucene would search faster
on an optimized index, but this is no longer necessary.", and more recently in
a thread you initiated a month ago, "Question about op
How does one search for words with characters like # and +? I have tried
searching Solr with "#test" and "\#test" but all my results always come up
with "test" and not "#test". Is this some kind of configuration option I
need to set in Solr?
--
- sent from my mobile
6176064373
On 7/22/2011 8:23 AM, Pierre GOSSE wrote:
I've read that in a thread titled "Weird optimize performance degradation", where Erick Erickson
states that "Older versions of Lucene would search faster on an optimized index, but this is no longer
necessary.", and more recently in a thread you initia
Check your analyzers to make sure that these characters are not getting
stripped out in the tokenization process; the URL for 3.3 is somewhere along
the lines of:
http://localhost/solr/admin/analysis.jsp?highlight=on
And you should indeed be searching on "\#test".
François
On Jul 2
On 7/22/2011 8:34 AM, Jason Toy wrote:
How does one search for words with characters like # and +? I have tried
searching Solr with "#test" and "\#test" but all my results always come up
with "test" and not "#test". Is this some kind of configuration option I
need to set in Solr?
I would gues
Adding to my previous reply, I just did a quick check on the 'text_en' and
'text_en_splitting' field types and they both strip leading '#'.
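For comparison, a sketch of a field type that keeps '#' because it only splits on whitespace (an assumption, not a drop-in fix; adjust filters to taste):
<fieldType name="text_ws_lower" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>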
Cheers
François
On Jul 22, 2011, at 10:49 AM, Shawn Heisey wrote:
> On 7/22/2011 8:34 AM, Jason Toy wrote:
>> How does one search for words with character
> Yes, exactly the same problem I am facing. Is there any way to resolve this issue?
I am not sure what you mean by "any way to resolve this issue." Did you read and
understand what I wrote below? I have nothing more to add. What is it you're
looking for?
The way to provide that kind of next/previou
Merging does not happen often enough to keep the deleted-document count low
enough?
Maybe there's a need to have "partial" optimization available in Solr, meaning
that segments with too many deleted documents could be copied to new files
without the unnecessary data. That way deleted data could be cleaned without
rewriting the whole index.
How old is 'older'? I'm pretty sure I'm still getting much faster performance
on an optimized index in Solr 1.4.
This could be due to the nature of my index and queries (which include some
medium-sized stored fields, and extensive faceting -- faceting on up to a
dozen fields in every request).
On 7/22/2011 9:32 AM, Pierre GOSSE wrote:
Merging does not happen often enough to keep the deleted-document count low
enough?
Maybe there's a need to have "partial" optimization available in Solr, meaning
that segments with too many deleted documents could be copied to new files without
unn
Hi Yonik,
thanks for your reply!
> Are you specifically selecting MMapDirectory in solrconfig.xml?
Nope.
We installed Oracle's Runtime from
http://java.com/de/download/linux_manual.jsp?locale=de
java.runtime.name = Java(TM) SE Runtime Environment
sun.boot.library.path = /usr/java/jdk1.6.0
OK, best guess is that you're going over some per-process address space limit.
Try seeing what "ulimit -a" says.
-Yonik
http://www.lucidimagination.com
On Fri, Jul 22, 2011 at 12:51 PM, mdz-munich
wrote:
> Hi Yonik,
>
> thanks for your reply!
>
>> Are you specifically selecting MMapDirectory in
It says:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257869
max locked memory (kbytes, -l) 64
max memory size (kbytes,
Yep, there ya go... your OS configuration is limiting you to 27G of
virtual address space per process. Consider setting that to
unlimited.
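For example (bash; the relevant limit here is virtual memory):
ulimit -v unlimited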
-Yonik
http://www.lucidimagination.com
On Fri, Jul 22, 2011 at 1:05 PM, mdz-munich
wrote:
> It says:
>
> core file size (blocks, -c) 0
> data se
Maybe it's important:
- The OS (openSUSE 10) is virtualized on VMware
- Network Attached Storage
Best regards
Sebastian
Hi all,
I've noticed some peculiar scoring issues going on in my application. For
example, I have a field that is multivalued and has several records that
have the same value. For example,
National Society of Animal Lovers
Nat. Soc. of Ani. Lov.
I have about 300 records with that exact val
On Fri, Jul 22, 2011 at 4:11 PM, Brian Lamb
wrote:
> I've noticed some peculiar scoring issues going on in my application. For
> example, I have a field that is multivalued and has several records that
> have the same value. For example,
>
>
> National Society of Animal Lovers
> Nat. Soc. of An
In Solr 1.3.1 I am able to store timestamps in my docs so that I can query them.
In trunk, when I try to store a doc with a timestamp I get a server error. Is
there a different way I should store this data, or is this a bug?
Jul 22, 2011 7:20:14 PM org.apache.solr.update.processor.LogUpdateProcessor
fi
: In Solr 1.3.1 I am able to store timestamps in my docs so that I can query them.
:
: In trunk when I try to store a doc with a timestamp I get a server error, is
: there a different way I should store this data or is this a bug?
Are you sure your schema has that field defined as a (Trie)DateField?
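For reference, a sketch of what that looks like in a trunk schema.xml (field and type names are illustrative):
<fieldType name="tdate" class="solr.TrieDateField" precisionStep="6" positionIncrementGap="0"/>
<field name="created_at" type="tdate" indexed="true" stored="true"/>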
I haven't modified my schema in the older Solr or trunk Solr. Is it required
to modify my schema to support timestamps?
On Fri, Jul 22, 2011 at 4:45 PM, Chris Hostetter
wrote:
> : In Solr 1.3.1 I am able to store timestamps in my docs so that I can query
> them.
> :
> : In trunk when I try to store a
How do I get the spellchecker to suggest compounded words?
E.g. q=sail booat
and the suggestion/collation would be "sailboat" and "sail boat".
This is the document I am posting:
<doc>
  <field name="...">Post</field>
  <field name="...">75004824785129473</field>
  <field name="...">Post</field>
  <field name="at_d">2011-05-30T01:05:18Z</field>
  <field name="...">New York</field>
  <field name="...">United States</field>
  <field name="data_text">hello world!</field>
</doc>
In my schema.xml file I have these date fields; do I need more?
On Fri, Jul 22, 2011 at 5:00 PM, Jason Toy wrote:
> I haven't modified my schema in the older Solr or tr
Hi Chris, you were correct, the field was getting set as a double. Thanks
for the help.
On Fri, Jul 22, 2011 at 7:03 PM, Jason Toy wrote:
> This is the document I am posting:
> <doc>
>   <field name="...">Post</field>
>   <field name="...">75004824785129473</field>
>   <field name="...">Post</field>
>   <field name="at_d">2011-05-30T01:05:18Z</field>
>   <field name="...">New York</field>
>   <field name="...">United States</field>
>   <field name="data_text">hello world!</field>
> </doc>
>
$.ajax({
    // query the 'db' core via dismax, returning JSON (json.wrf enables JSONP)
    url: solrURL + "/solr/db/select/?qt=dismax&wt=json&start=" + start
        + "&rows=" + end + "&q=" + query + "&json.wrf=?",
    async: false,
    dataType: 'json',
    success: function() {
        getSolrResponse(sort, order, itemPerPage, showPage, query, solrURL);
    }
});
I have a custom search component which does the following in its process() method:
// grab the current request and its parameters from the ResponseBuilder
SolrQueryRequest req = rb.req;
SolrParams params = req.getParams();
// wrap the main query as a Lucene filter
QueryWrapperFilter qwf = new QueryWrapperFilter(rb.getQuery());
Filter filter = new TestFi