I took another look at the stack trace and I'm pretty sure the issue is
with NULL values in one of the sort fields. The null pointer is occurring
during the comparison of sort values. See line 85 of:
https://github.com/apache/lucene-solr/blob/branch_5_5/solr/solrj/src/java/org/apache/solr/client/so
Hi Joel,
I don't have any solr documents that have NULL values for the sort fields I
use in my queries.
Thanks!
On Sun, Dec 18, 2016 at 12:56 PM, Joel Bernstein wrote:
> Ok, based on the stack trace I suspect one of your sort fields has NULL
> values, which in the 5x branch could produce null
Ok, based on the stack trace I suspect one of your sort fields has NULL
values, which in the 5x branch could produce null pointers if a segment had
no values for a sort field. This is also fixed in the Solr 6x branch.
Joel Bernstein
http://joelsolr.blogspot.com/
On Sat, Dec 17, 2016 at 2:44 PM, C
Here is the stack trace.
java.lang.NullPointerException
at
org.apache.solr.client.solrj.io.comp.FieldComparator$2.compare(FieldComparator.java:85)
at
org.apache.solr.client.solrj.io.comp.FieldComparator.compare(FieldComparator.java:92)
at
org.apache.solr.client.solrj.io.
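The trace above points at the sort-value comparison. For illustration only (plain Java, not the actual FieldComparator fix in the 6x branch), a null-safe comparison along these lines avoids the NPE when a segment has no value for a sort field:

import java.util.Comparator;
import java.util.Map;

// Illustrative sketch: tuples missing the sort field sort last instead of
// triggering a NullPointerException during comparison.
public class NullSafeFieldComparator implements Comparator<Map<String, Object>> {
    private final String field;

    public NullSafeFieldComparator(String field) {
        this.field = field;
    }

    @Override
    @SuppressWarnings({"unchecked", "rawtypes"})
    public int compare(Map<String, Object> a, Map<String, Object> b) {
        Comparable left = (Comparable) a.get(field);
        Comparable right = (Comparable) b.get(field);
        if (left == null && right == null) return 0;
        if (left == null) return 1;    // missing value sorts after present ones
        if (right == null) return -1;
        return left.compareTo(right);
    }
}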
If you could provide the JSON parse exception stack trace, it might help to
pinpoint the issue there.
On Fri, Dec 16, 2016 at 5:52 PM, Chetas Joshi
wrote:
> Hi Joel,
>
> The only non-alphanumeric characters I have in my data are '+' and '/'. I
> don't have any backslashes.
>
> If the special charac
Hi Joel,
The only non-alphanumeric characters I have in my data are '+' and '/'. I
don't have any backslashes.
If the special characters were the issue, I should get the JSON parsing
exceptions every time, irrespective of the index size and of the available
memory on the machine. That
The Streaming API may have been throwing exceptions because the JSON
special characters were not escaped. This was fixed in Solr 6.0.
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, Dec 16, 2016 at 4:34 PM, Chetas Joshi
wrote:
> Hello,
>
> I am running Solr 5.5.0.
> It is a solrCloud
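Joel's note above concerns JSON string escaping. As a point of reference (a minimal sketch, not the actual Solr 6.0 fix), the characters a JSON string value needs escaped are quotes, backslashes and control characters; '+' and '/' are not among them:

public final class JsonEscape {
    // Escape a raw string so it can be embedded in a JSON string value.
    public static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length() + 8);
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '"':  sb.append("\\\""); break;
                case '\\': sb.append("\\\\"); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                case '\t': sb.append("\\t");  break;
                default:
                    if (c < 0x20) {
                        sb.append(String.format("\\u%04x", (int) c));
                    } else {
                        sb.append(c);
                    }
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("a \"quoted\" value\nwith + and /"));
    }
}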
Hello,
I am running Solr 5.5.0.
It is a SolrCloud of 50 nodes, and I have the following config for all the
collections:
maxShardsPerNode: 1
replicationFactor: 1
I was using the Streaming API to get results back from Solr. It worked fine for
a while until the index data size grew beyond 40 GB per sh
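For context, a rough sketch of how such a Streaming API read typically looks in SolrJ of that era (collection name, fields and the ZooKeeper address are placeholders; constructor signatures differ slightly between the 5.x and 6.x releases):

import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;

public class StreamRead {
    public static void main(String[] args) throws Exception {
        Map<String, String> props = new HashMap<>();
        props.put("q", "*:*");
        props.put("fl", "id,timestamp");
        props.put("sort", "timestamp asc");
        props.put("qt", "/export");              // stream the full result set

        CloudSolrStream stream =
            new CloudSolrStream("zkhost1:2181", "mycollection", props);
        try {
            stream.open();
            while (true) {
                Tuple tuple = stream.read();
                if (tuple.EOF) break;            // sentinel tuple marks the end
                System.out.println(tuple.getString("id"));
            }
        } finally {
            stream.close();
        }
    }
}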
On 11/7/2016 6:27 AM, Mugeesh Husain wrote:
> For a large data set, I am going to implement many shards and
> many nodes. I am unfamiliar with performance tuning in Solr. How do people
> use or check Solr performance? Is there any open source tool or should I
> create my own for
For a large data set, I am going to implement many shards and many
nodes.
I am unfamiliar with performance tuning in Solr. How do people use or check
Solr performance?
Is there any open source tool, or should I create my own for this, and how?
Thanks
Mugeesh
Hi
I have a few filter queries that use cross-core joins to filter
documents. After I inverted those joins, they became slower. It looks
something like this:
I used to query the "product" core with a query that contains fq={!join to=tags
from=preferred_tags fromIndex=user}(country:US AND
...)&fq=
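The inverted form is cut off above, so as a purely illustrative sketch (core and field names beyond those quoted are hypothetical), the two join directions built with SolrJ look roughly like this; which direction is faster depends on how many documents and unique join terms sit on the "from" side:

import org.apache.solr.client.solrj.SolrQuery;

public class JoinQueries {
    public static void main(String[] args) {
        // Original direction: query the "product" core, with the filter
        // driven from the "user" core.
        SolrQuery original = new SolrQuery("*:*");
        original.addFilterQuery(
            "{!join to=tags from=preferred_tags fromIndex=user}country:US");

        // Hypothetical inverted direction: query the "user" core, joining
        // from the (typically much larger) "product" core.
        SolrQuery inverted = new SolrQuery("*:*");
        inverted.addFilterQuery(
            "{!join to=preferred_tags from=tags fromIndex=product}category:books");

        System.out.println(original);
        System.out.println(inverted);
    }
}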
returns only 5 solr documents. Under load
> > condition it takes 100 ms to 2000 ms.
> >
> >
> > -Original Message-
> > From: Maulin Rathod
> > Sent: 03 March 2016 12:24
> > To: solr-user@lucene.apache.org
> > Subject: RE: Solr Configuration (Caching & RAM
it takes 100 ms to 2000 ms.
>
>
> -Original Message-
> From: Maulin Rathod
> Sent: 03 March 2016 12:24
> To: solr-user@lucene.apache.org
> Subject: RE: Solr Configuration (Caching & RAM) for performance Tuning
>
> we do soft commit when we insert/update document.
>
oad
condition it takes 100 ms to 2000 ms.
-Original Message-
From: Maulin Rathod
Sent: 03 March 2016 12:24
To: solr-user@lucene.apache.org
Subject: RE: Solr Configuration (Caching & RAM) for performance Tuning
we do soft commit when we insert/update document.
//Insert D
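The code is cut off above; the insert-then-soft-commit pattern being described looks roughly like this in SolrJ (URL and field names are placeholders, 5.x-style constructor):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SoftCommitExample {
    public static void main(String[] args) throws Exception {
        try (SolrClient client = new HttpSolrClient("http://localhost:8983/solr/mycore")) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");
            doc.addField("title", "example document");
            client.add(doc);

            // waitFlush=true, waitSearcher=true, softCommit=true: open a new
            // searcher without fsyncing the new segments to disk.
            client.commit(true, true, true);
        }
    }
}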
: solr-user@lucene.apache.org
Subject: Re: Solr Configuration (Caching & RAM) for performance Tuning
1) Experiment with the autowarming settings in solrconfig.xml. Since in your
case, you're indexing so frequently consider setting the count to a low number,
so that not a lot of time is spen
1) Experiment with the autowarming settings in solrconfig.xml. Since in
your case you're indexing so frequently, consider setting the count to a
low number so that not a lot of time is spent warming the caches.
Alternatively if you're not very big on initial query response times being
small, you c
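For reference, a hypothetical solrconfig.xml excerpt along the lines of that advice (cache classes and sizes are only illustrative):

<!-- Small autowarmCount values so each commit spends little time
     re-populating caches from the previous searcher. -->
<query>
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="16"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="16"/>
  <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
</query>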
Hi,
We are using Solr 5.2 (on Windows 2012 Server/JDK 1.8) for document content
indexing/querying. We found that querying slows down intermittently under load
conditions.
In our analysis we found two issues.
1) Solr is not effectively using caching.
Whenever a new document is indexed, it opens a new
What is your Solr version, query parameters and debug output?
On 26.01.2016 at 6:38, "Bhawna Asnani"
wrote:
> Hi,
> I am using solr multicore join queries for some admin filters. The queries
> are really slow, taking up to 40-60 seconds in some cases.
>
> I recently read that the schema field u
Hi,
I am using solr multicore join queries for some admin filters. The queries
are really slow, taking up to 40-60 seconds in some cases.
I recently read that the schema field used to join to should have
'docValues=true'.
Besides that, any suggestion to improve the performance?
-Bhawna
Hello,
What's your OS/CPU? Is it a VM or real hardware? Which JVM do you run, and
with which parameters? Have you checked the GC log? What's the index size?
What are typical query parameters? What's the average number of results per
query? Have you tried running a query with debugQuery=true during hard loa
Hi users,
Could you please help us with tuning Solr search performance? We have done
some performance testing on a Solr instance with 8 GB RAM and 50,000 records in
the index, and we got 33 concurrent users hitting the instance at an average of
17.5 hits per second with a response time of 2 seconds. As it is very high
On Wed, Jul 27, 2011 at 4:12 PM, Fuad Efendi wrote:
> Thanks Robert!!!
>
> "Submitted On 26-JUL-2011" - yesterday.
>
> This option was popular in HBase…
Then you should also tell them not to use it, if they want their loops to work.
--
lucidimagination.com
Thanks Robert!!!
"Submitted On 26-JUL-2011" - yesterday.
This option was popular in HBase
On 11-07-27 3:58 PM, "Robert Muir" wrote:
>Don't use this option, these optimizations are buggy:
>
>http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7070134
>
>
>On Wed, Jul 27, 2011 at 3:56 PM, Fuad
Don't use this option, these optimizations are buggy:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7070134
On Wed, Jul 27, 2011 at 3:56 PM, Fuad Efendi wrote:
> Anyone tried this? I can not start Solr-Tomcat with following options on
> Ubuntu:
>
> JAVA_OPTS="$JAVA_OPTS -Xms2048m -Xmx2048m
Has anyone tried this? I cannot start Solr-Tomcat with the following options on
Ubuntu:
JAVA_OPTS="$JAVA_OPTS -Xms2048m -Xmx2048m -Xmn256m -XX:MaxPermSize=256m"
JAVA_OPTS="$JAVA_OPTS -Dsolr.solr.home=/data/solr -Dfile.encoding=UTF8
-Duser.timezone=GMT
-Djava.util.logging.config.file=/data/solr/logging.pr
larmed by this... it just seems a little strange.
>
> thanks,
> Demian
>
>> -Original Message-
>> From: Erick Erickson [mailto:erickerick...@gmail.com]
>> Sent: Monday, June 06, 2011 11:59 AM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Solr performan
2011 11:59 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr performance tuning - disk i/o?
>
> Polling interval was in reference to slaves in a multi-machine
> master/slave setup. so probably not
> a concern just at present.
>
> Warmup time of 0 is not particularly no
> showing as 0 -- is that normal?
>
> thanks,
> Demian
>
>> -Original Message-
>> From: Erick Erickson [mailto:erickerick...@gmail.com]
>> Sent: Friday, June 03, 2011 4:45 PM
>> To: solr-user@lucene.apache.org
>> Subject: Re: Solr performance
M
> To: solr-user@lucene.apache.org
> Subject: Re: Solr performance tuning - disk i/o?
>
> Quick impressions:
>
> The faceting is usually best done on fields that don't have lots of
> unique
> values for three reasons:
> 1> It's questionable how much use to the
is seems like a big step in the right
> direction. Thanks again for the help!
>
> - Demian
>
>> -Original Message-
>> From: Erick Erickson [mailto:erickerick...@gmail.com]
>> Sent: Friday, June 03, 2011 9:41 AM
>> To: solr-user@lucene.apache.org
>> Subj
://search-lucene.com/
- Original Message
> From: Demian Katz
> To: "solr-user@lucene.apache.org"
> Sent: Fri, June 3, 2011 11:21:52 AM
> Subject: RE: Solr performance tuning - disk i/o?
>
> Thanks to you and Otis for the suggestions! Some more information:
&g
ickerick...@gmail.com]
> Sent: Friday, June 03, 2011 9:41 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr performance tuning - disk i/o?
>
> This doesn't seem right. Here's a couple of things to try:
> 1> attach &debugQuery=on to your long-running queries
This doesn't seem right. Here's a couple of things to try:
1> attach &debugQuery=on to your long-running queries. The QTime returned
is the time taken to search, NOT including the time to load the
docs. That'll
help pinpoint whether the problem is the search itself, or assembling the
> To: "solr-user@lucene.apache.org"
> Sent: Fri, June 3, 2011 8:44:33 AM
> Subject: Solr performance tuning - disk i/o?
>
> Hello,
>
> I'm trying to move a VuFind installation from an ailing physical server into
> a
>virtualized environment, and I'
Hello,
I'm trying to move a VuFind installation from an ailing physical server into a
virtualized environment, and I'm running into performance problems. VuFind is
a Solr 1.4.1-based application with fairly large and complex records (many
stored fields, many words per record). My particular i
the wiki.
Thanks!
Andrew.
I was lucky to contribute an excellent solution:
http://issues.apache.org/jira/browse/LUCENE-2230
Even the 2nd edition of Lucene in Action advocates using fuzzy search only in
exceptional cases.
Another solution would be 2-step indexing (it may work for many use cases),
but it is not "spellchecker
http://issues.apache.org/jira/browse/LUCENE-2230
Enjoy!
> -Original Message-
> From: Fuad Efendi [mailto:f...@efendi.ca]
> Sent: January-19-10 11:32 PM
> To: solr-user@lucene.apache.org
> Subject: SOLR Performance Tuning: Fuzzy Searches, Distance, BK-Tree
>
> Hi,
&
Hi,
I am wondering: will SOLR or Lucene use caches for fuzzy searches? I mean
per-term caching or something, internal to Lucene, or maybe SOLR (SOLR may
use its own query parser)...
Anyway, I implemented a BK-tree and am playing with it right now; I altered
the FuzzyTermEnum class of Lucene...
http://en.wik
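For readers unfamiliar with the structure being referenced, here is a compact stand-alone BK-tree over Levenshtein distance (an illustration of the idea only, not the LUCENE-2230 patch or the altered FuzzyTermEnum):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BkTree {
    private final String term;
    private final Map<Integer, BkTree> children = new HashMap<>();

    public BkTree(String term) { this.term = term; }

    public void add(String word) {
        int d = levenshtein(word, term);
        if (d == 0) return;                        // already present
        BkTree child = children.get(d);
        if (child == null) {
            children.put(d, new BkTree(word));
        } else {
            child.add(word);
        }
    }

    // Collect all terms within maxDist edits of the query. The triangle
    // inequality means only children whose edge distance lies in
    // [d - maxDist, d + maxDist] can contain matches.
    public void search(String query, int maxDist, List<String> out) {
        int d = levenshtein(query, term);
        if (d <= maxDist) out.add(term);
        for (int i = Math.max(1, d - maxDist); i <= d + maxDist; i++) {
            BkTree child = children.get(i);
            if (child != null) child.search(query, maxDist, out);
        }
    }

    static int levenshtein(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                curr[j] = Math.min(Math.min(curr[j - 1] + 1, prev[j] + 1),
                                   prev[j - 1] + cost);
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }

    public static void main(String[] args) {
        BkTree tree = new BkTree("book");
        tree.add("books"); tree.add("boo"); tree.add("cake"); tree.add("cape");
        List<String> hits = new ArrayList<>();
        tree.search("bok", 1, hits);
        System.out.println(hits);                  // [book, boo]
    }
}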
Si si, that issue.
Otis
--
Sematext -- http://sematext.com/ -- Solr - Lucene - Nutch
- Original Message
> From: Peter Wolanin
> To: solr-user@lucene.apache.org
> Sent: Thu, January 7, 2010 9:27:04 PM
> Subject: Re: SOLR Performance Tuning: Pagination
>
> Great -
un, January 3, 2010 3:37:01 PM
>> Subject: Re: SOLR Performance Tuning: Pagination
>>
>> At the NOVA Apache Lucene/Solr Meetup last May, one of the speakers
>> from Near Infinity (Aaron McCurry I think) mentioned that he had a
>> patch for lucene that enabled unlimi
nin
> To: solr-user@lucene.apache.org
> Sent: Sun, January 3, 2010 3:37:01 PM
> Subject: Re: SOLR Performance Tuning: Pagination
>
> At the NOVA Apache Lucene/Solr Meetup last May, one of the speakers
> from Near Infinity (Aaron McCurry I think) mentioned that he had a
> patch
At the NOVA Apache Lucene/Solr Meetup last May, one of the speakers
from Near Infinity (Aaron McCurry I think) mentioned that he had a
patch for lucene that enabled unlimited depth memory-efficient paging.
Is anyone in contact with him?
-Peter
On Thu, Dec 24, 2009 at 11:27 AM, Grant Ingersoll w
On Dec 24, 2009, at 1:51 PM, Walter Underwood wrote:
> Some bots will do that, too. Maybe badly written ones, but we saw that at
> Netflix. It was causing search timeouts just before a peak traffic period, so
> we set a page limit in the front end, something like 200 pages.
>
> It makes sense
). But some queries may return a huge number of documents
(better to tune the "stop-word" list)
-Fuad
> -Original Message-
> From: Walter Underwood [mailto:wun...@wunderwood.org]
> Sent: December-24-09 1:51 PM
> To: solr-user@lucene.apache.org
> Subject: Re: SOLR Perfo
.
> -Original Message-
> From: Walter Underwood [mailto:wun...@wunderwood.org]
> Sent: December-24-09 11:37 AM
> To: solr-user@lucene.apache.org
> Subject: Re: SOLR Performance Tuning: Pagination
>
> When do users do a query like that? --wunder
>
> On Dec
olr call that is
applied to the relevance before sorting.
[It also made me jump through hoops when I wrote some unit tests for the
indexing.]
-Original Message-
From: Walter Underwood [mailto:wun...@wunderwood.org]
Sent: December-24-09 1:51 PM
To: solr-user@lucene.apache.org
Subj
; To: solr-user@lucene.apache.org
> Subject: Re: SOLR Performance Tuning: Pagination
>
> Some bots will do that, too. Maybe badly written ones, but we saw that at
> Netflix. It was causing search timeouts just before a peak traffic period,
> so we set a page limit in the front end, something
Some bots will do that, too. Maybe badly written ones, but we saw that at
Netflix. It was causing search timeouts just before a peak traffic period, so
we set a page limit in the front end, something like 200 pages.
It makes sense for that to be very slow, because a request for hit 28838540
mea
On Dec 24, 2009, at 11:36 AM, Walter Underwood wrote:
When do users do a query like that? --wunder
Well, SolrEntityProcessor "users" do :)
http://issues.apache.org/jira/browse/SOLR-1499
(which by the way I plan on polishing and committing over the
holidays)
Erik
On Dec 24
FWIW, when implementing distributed search I ran into a similar problem,
but then I noticed even Google doesn't let you go past page 1000; it's
easier to just set a limit on start
On Thu, Dec 24, 2009 at 8:36 AM, Walter Underwood wrote:
> When do users do a query like that? --wunder
>
> On Dec 24, 200
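A trivial front-end guard in the spirit of that suggestion (page size and cutoff are arbitrary); the point is that Solr has to rank start+rows documents for every request, so capping the page depth bounds the cost:

public final class Paging {
    private static final int ROWS = 20;
    private static final int MAX_PAGE = 1000;      // Google-style cutoff

    static int startFor(int requestedPage) {
        int page = Math.max(1, Math.min(requestedPage, MAX_PAGE));
        return (page - 1) * ROWS;
    }

    public static void main(String[] args) {
        System.out.println(startFor(3));           // 40
        System.out.println(startFor(1_500_000));   // capped at 19980
    }
}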
When do users do a query like that? --wunder
On Dec 24, 2009, at 8:09 AM, Fuad Efendi wrote:
> I used pagination for a while till found this...
>
>
> I have filtered query ID:[* TO *] returning 20 millions results (no
> faceting), and pagination always seemed to be fast. However, fast only with
On Dec 24, 2009, at 11:09 AM, Fuad Efendi wrote:
> I used pagination for a while till found this...
>
>
> I have filtered query ID:[* TO *] returning 20 millions results (no
> faceting), and pagination always seemed to be fast. However, fast only with
> low values for start=12345. Queries like
I used pagination for a while till I found this...
I have a filtered query ID:[* TO *] returning 20 million results (no
faceting), and pagination always seemed to be fast. However, it is fast only
with low values like start=12345. Queries like start=28838540 take 40-60
seconds, and even cause OutOfMemoryEx
> Can you quickly explain what you did to disable INFO-Level?
>
> I am from a PHP background and am not so well versed in Tomcat or
> Java. Is this a section in solrconfig.xml or did you have to edit
> Solr Java source and recompile?
1. Create a file called logging.properties with following con
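The file contents are cut off above; a plausible java.util.logging configuration that silences Solr's per-request INFO lines under Tomcat might look like this (an assumption for illustration, not the original poster's file; Tomcat is then pointed at it with -Djava.util.logging.config.file=...):

# Raise the default level so INFO messages are skipped.
.level = WARNING
org.apache.solr.level = WARNING

handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = WARNING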
Hi
Can you quickly explain what you did to disable INFO-Level?
I am from a PHP background and am not so well versed in Tomcat or
Java. Is this a section in solrconfig.xml or did you have to edit
Solr Java source and recompile?
Thanks In Advance
Andrew
2009/12/20 Fuad Efendi :
> After research
q,rsp);
setResponseHeaderValues(handler,req,rsp);
StringBuilder sb = new StringBuilder();
for (int i=0; i
> -Original Message-
> From: Fuad Efendi [mailto:f...@efendi.ca]
> Sent: December-20-09 2:54 PM
> To: solr-user@lucene.apache.org
> Subject: SOLR Performance Tuning
After researching how to configure default SOLR & Tomcat logging, I finally
disabled INFO-level for SOLR.
And performance improved at least 7 times!!! ('at least 7' because I
restarted the server 5 minutes ago; caches are not prepopulated yet)
Before that, I had 300-600 ms in HTTPD log files in avera
g...@gmail.com]
Sent: August-17-09 1:45 PM
To: solr-user@lucene.apache.org
Subject: Re: Performance Tuning: segment_merge:index_update=5:1 (timing)
Fuad,
I'd recommend indexing in Hadoop, then copying the new indexes to Solr
slaves. This removes the need for Solr master servers. Of cour
Fuad,
I'd recommend indexing in Hadoop, then copying the new indexes to Solr
slaves. This removes the need for Solr master servers. Of course
you'd need a Hadoop cluster larger than the number of master servers
you have now. The merge indexes command (which can be taxing on the
servers because
-09 4:20 PM
To: solr-user@lucene.apache.org
Subject: Re: Performance Tuning: segment_merge:index_update=5:1 (timing)
BTW, what version of Solr are you on?
On Aug 13, 2009, at 1:43 PM, Fuad Efendi wrote:
> UPDATE:
>
> I have 100,000,000 new documents in 24 hours, including possible
> updat
BTW, what version of Solr are you on?
On Aug 13, 2009, at 1:43 PM, Fuad Efendi wrote:
UPDATE:
I have 100,000,000 new documents in 24 hours, including possible
updates OR
possibly adding same document several times. I have two segments now
(30Gb
total), and network is overloaded (I use web
UPDATE:
I have 100,000,000 new documents in 24 hours, including possible updates or
possibly adding the same document several times. I have two segments now (30Gb
total), and the network is overloaded (I use a web crawler to generate documents).
I never had more than 25,000,000 within a month before...
I r
gh
only "timestamp" field changes in existing "refreshed" document)
-Original Message-
From: Grant Ingersoll
Sent: August-11-09 9:52 PM
To: solr-user@lucene.apache.org
Subject: Re: Performance Tuning: segment_merge:index_update=5:1 (timing)
Is there a tim
Is there a time of day you could schedule merges? See
http://www.lucidimagination.com/search/document/bd53b0431f7eada5/concurrentmergescheduler_and_mergepolicy_question
Or, you might be able to implement a scheduler that only merges the
small segments, and then does the larger ones at slow ti
Forgot to add: committing only once a day.
I tried mergeFactor=1000 and index write performance was extremely good
(more than 50,000,000 updates during part of a day).
However, "commit" was taking 2 days or more and I simply killed the process
(suspecting that it could break my hard drive); I had about
Never tried profiling;
3000-5000 docs per second if SOLR is not busy with segment merge;
During segment merge 99% CPU, no disk swap; I can't suspect I/O...
During document updates (small batches 100-1000 docs) only 5-15% CPU
-server 2048Mb option of JVM (which is JRockit) + 256M for RAM Buffer
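For orientation, the merge factor and RAM buffer values discussed in this thread are normally set in solrconfig.xml; in 4.x-6.x style it looks roughly like this (the 1.x-era releases discussed here used an <indexDefaults> section instead), with values mirroring those mentioned in the messages:

<indexConfig>
  <ramBufferSizeMB>256</ramBufferSizeMB>
  <mergeFactor>100</mergeFactor>
</indexConfig>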
Have you tried profiling? How often are you committing? Have you
looked at Garbage Collection or any of the usual suspects like that?
On Aug 11, 2009, at 4:49 PM, Fuad Efendi wrote:
In a heavily loaded Write-only Master SOLR, I have 5 minutes of RAM
Buffer
Flush / Segment Merge per 1 min
In a heavily loaded Write-only Master SOLR, I have 5 minutes of RAM Buffer
Flush / Segment Merge per 1 minute of (heavy) batch document updates.
I am using mergeFactor=100 etc. (I already posted a message...)
So that... I can't see that hardware is the problem: with more CPU and faster
RAID-0 I'll get the
Thank you! Those are set to zero for me too (at the moment!) so I guess it's
good news.
-Original Message-
From: Mike Klaas [mailto:[EMAIL PROTECTED]
Sent: 11 January 2007 23:13
To: solr-user@lucene.apache.org
Subject: Re: Performance tuning
On 1/11/07, Stephanie Belton &l
On 1/11/07, Stephanie Belton <[EMAIL PROTECTED]> wrote:
Thanks for that. I am sorry this isn't really Solr-related but how can I
monitor the swapping if I can't rely on the output of the free command?
$ vmstat -S M 3
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
On 1/11/07 2:33 PM, "Yonik Seeley" <[EMAIL PROTECTED]> wrote:
> On 1/11/07, Stephanie Belton <[EMAIL PROTECTED]> wrote:
>> The reason I am keeping a close eye on resource usage is that our traffic is
>> increasing by around 20% every month (currently over 400,000 page
>> impressions/day although n
On 1/11/07, Stephanie Belton <[EMAIL PROTECTED]> wrote:
The reason I am keeping a close eye on resource usage is that our traffic is
increasing by around 20% every month (currently over 400,000 page
impressions/day although not all of them are search queries!) and I want to
make sure we tackle an
rather keep load balancing as a last resort due to cost implications.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik Seeley
Sent: 11 January 2007 22:02
To: solr-user@lucene.apache.org
Subject: Re: Performance tuning
On 1/11/07, Stephanie Belton
oing
through the performance tuning advice on the wiki?
Unfortunately, I think that's pretty old stuff.
People are normally concerned with:
- the number of requests per second they can handle with their server
- the average latency of requests (or median, 99 percentile, etc)
A goal of reducing CPU
Thanks for that. I am sorry this isn't really Solr-related but how can I
monitor the swapping if I can't rely on the output of the free command?
Do you think I could still achieve any significant improvements by going
through the performance tuning advice on the wiki?
-Origin
age was peaking at 20% and
memory around 28% no swapping.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik Seeley
Sent: 11 January 2007 15:12
To: solr-user@lucene.apache.org
Subject: Re: Performance tuning
On 1/11/07, Stephanie Belton <[EMAIL PROTECT
Thanks for sending this link, I seem to have missed that on the wiki!
-Original Message-
From: Thorsten Scherler [mailto:[EMAIL PROTECTED]
Sent: 11 January 2007 15:06
To: solr-user@lucene.apache.org
Subject: Re: Performance tuning
On Thu, 2007-01-11 at 14:57 +, Stephanie Belton
it's been
running for 1 year and up until now the CPU usage was peaking at 20% and
memory around 28% no swapping.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Yonik Seeley
Sent: 11 January 2007 15:12
To: solr-user@lucene.apache.org
Subject: Re: Per
On 1/11/07, Stephanie Belton <[EMAIL PROTECTED]> wrote:
Solr is now up and running on our production environment and working great.
However it is taking up a lot of extra CPU and memory (CPU usage has doubled
and memory is swapping). Is there any documentation on performance tuning?
any documentation on performance tuning?
> There seems to be a lot of useful info in the server output but I don’t
> understand it.
>
>
>
> E.g.
> filterCache{lookups=0,hits=0,hitratio=0.00,inserts=537,evictions=0,size=337,cumulative_lookups=4723,cumulative_hits=3708,c