Hi All,
We have a Solr 6.3.0 SolrCloud cluster running stably. We use an autocomplete
search feature where the search fires after 3 keystrokes. Recently we ran into
an issue (very slow responses from Solr) when one of our users hit an
autocomplete search by typing the initial letters of week [...]
> From: Walter Underwood
> Sent: Tuesday, March 19, 2019 3:29:17 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr index slow response
>
> Indexing is CPU bound. If you have enough RAM, SSD disks, and enough client
> threads, you should be able to drive CPU to over 90%.
>
> Start with two clients [...]
[...] time.
I will try with a SolrCloud cluster; maybe I'll get better speed there.
//Aaron
>> Those long response times are not really spikes; it [...]
> The Solr server is running on quite a powerful machine: 32 CPUs, 400 GB RAM,
> with 300 GB reserved for Solr. While this is happening, CPU usage is around
> 30% and memory usage is 34%. I/O also looks OK according to iotop. SSD disk.
>
> Our application sends 100 documents to Solr per request, JSON encoded; the
> size is around 5 MB each time. Sometimes the response time is under 1 second,
> sometimes it can be 300 seconds; the slow responses happen very often.
>
> "Soft AutoCommit: disabled", "Hard AutoCommit: if uncommited for 360ms; if
> 100 uncommited docs"
>
> There are around 100 clients sending those documents at the same time, but
> each client call is a blocking call which wa [...]
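For reference, a minimal sketch of a client like the one described above
(hypothetical host, core, and field names; it sends 100 documents per JSON
update request and leaves committing to the server-side autoCommit policy):

import json
import requests

SOLR_UPDATE_URL = "http://localhost:8983/solr/mycore/update"  # hypothetical

def index_batch(docs):
    # commit=false leaves committing to the server's autoCommit settings,
    # matching the "Soft AutoCommit: disabled" setup quoted above.
    resp = requests.post(
        SOLR_UPDATE_URL,
        params={"commit": "false"},
        headers={"Content-Type": "application/json"},
        data=json.dumps(docs),
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["responseHeader"]["QTime"]

# 100 documents per request, as in the setup described above.
batch = [{"id": str(i), "text_t": "document body %d" % i} for i in range(100)]
print("QTime:", index_batch(batch), "ms")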
___
> From: Emir Arnautović
> Sent: Tuesday, March 19, 2019 1:00:19 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr index slow response
>
> If you start indexing with just a single thread/client, do you still see
> slow bulks?
>
> Emir
Design Considerations: indexed fields. The number of indexed fields greatly
increases the following: memory usage during indexing; segment merge time.
> "QTime" value is from the solr rest api response, extracted from the
> http/json payload. The "Request time" is what I measured from client side,
> it's almost the same value as QTime, just some milliseconds difference. I
> could provide tcpdump to prove t
"QTime" value is from the solr rest api response, extracted from the http/json
payload. The "Request time" is what I measured from client side, it's almost
the same value as QTime, just some milliseconds difference. I could provide
tcpdump to prove that it is really s
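A quick way to compare the two numbers from the client side (hypothetical host
and core; QTime is the server-side time in milliseconds reported in the
responseHeader):

import time
import requests

start = time.monotonic()
resp = requests.get(
    "http://localhost:8983/solr/mycore/select",  # hypothetical
    params={"q": "*:*", "rows": 0, "wt": "json"},
    timeout=60,
)
elapsed_ms = (time.monotonic() - start) * 1000
qtime_ms = resp.json()["responseHeader"]["QTime"]
print("QTime: %d ms, client-measured: %.0f ms" % (qtime_ms, elapsed_ms))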
On 19 Mar 2019, at 09:17, Aaron Yingcai Sun wrote:

We have around 80 million documents to index, with a total index size of
around 3 TB; I guess I'm not the first one to work with this amount of data.
With such slow response times, the index process would take around 2 weeks:
at 100 documents per request, 80 million documents is 800,000 requests, so
even an average of 1-2 seconds per request adds up to 9-19 days for a single
client. While the system resources are not heavily loaded, there must be a
way to [...]
Hello, Chris,

Thanks for the tips. I tried the settings you suggested but did not see much
improvement. Since I don't need documents visible immediately, softCommit is
disabled entirely.

The slow responses happen every few seconds; if they happened hourly I would
suspect the hourly auto-commit [...]
___
> From: Emir Arnautović
> Sent: Monday, March 18, 2019 2:19:19 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr index slow response
>
> Hi Aaron,
> Without looking too much into the numbers, my bet would be that it is the
> large heap that is causing the issues.
> 190318-142655.210-160208 DBG1: doc_count: 10, doc_size: 605 KB, Res code:
> 200, QTime: 108 ms, Request time: 110 ms.
> 190318-142655.304-160208 DBG1: doc_count: 10, doc_size: 481 KB, Res code:
> 200, QTime: 89 ms, Request time: 90 ms.
> 190318-142655.410-160208 DBG1: doc_count: 10, doc_size: 4 [...]
[...] time.
BRs
//Aaron
To: solr-user@lucene.apache.org
Subject: Re: Solr index slow response
On Mon, 2019-03-18 at 10:47 [...], Aaron Yingcai Sun wrote:
> The Solr server is running on quite a powerful machine: 32 CPUs, 400 GB RAM,
> with 300 GB reserved for Solr, [...]

300 GB for Solr sounds excessive.

> Our applica [...]
.0_144/jre/classes",
>> "classpath":"...",
>> "commandLineArgs":["-Xms100G",
>> "-Xmx300G",
>> "-DSTOP.PORT=8079",
>> "-DSTOP.KEY=..",
>> "-Dsolr.solr.home=.."
On 18 Mar 2019, at 13:14, Aaron Yingcai Sun wrote:

Hello, Emir,

Thanks for the reply. This is the Solr version and heap info; it is a
standalone single Solr server. I don't have a monitoring tool connected, only
'top'. I have not seen a CPU spike so far; when the slow response happens,
CPU usage is not high at all, around 30%.
# curl [...]
Are you getting errors in JMeter?

On Wed, 24 Oct 2018, 21:49 Amjad Khan wrote:

> Hi,
>
> We recently moved to Solr Cloud (Google) with 4 nodes and have a very
> limited amount of data.
>
> We are facing a very weird issue here: the Solr cluster response time for
> queries is high when we have a small number [...]
If your cache is 2048 entries, then every one of those 1600 queries is in cache.
Our logs typically have about a million lines, with distinct queries
distributed according to the Zipf law. Some common queries, a long tail, that
sort of thing.
wunder
Walter Underwood
wun...@wunderwood.org
http:/ [...]
But a zero-size cache doesn't give realistic benchmarks. It makes things
slower than they will be in production.

We do this:

1. Collect production logs.
2. Split the logs into a warming log and a benchmark log. The warming log
should be at least as large as the query result cache (a sketch of this split
follows below).
3. Run th [...]
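A minimal sketch of the split in step 2, assuming a plain-text log with one
query per line (file names and the cache size are hypothetical):

# Split a production query log into a warming set (at least as large as the
# query result cache) and a benchmark set, as described above.
CACHE_SIZE = 2048  # match the queryResultCache size in solrconfig.xml

with open("production_queries.log") as f:
    queries = [line.strip() for line in f if line.strip()]

warm, bench = queries[:CACHE_SIZE], queries[CACHE_SIZE:]

with open("warming.log", "w") as f:
    f.write("\n".join(warm))
with open("benchmark.log", "w") as f:
    f.write("\n".join(bench))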
Thanks Erick,
But do you think that disabling the cache will increase the response time
rather than solve the problem here?

> On Oct 24, 2018, at 12:52 PM, Erick Erickson wrote:
>
> queryResultCache [...]
Thanks Wunder for the prompt response.
We are testing with 1600 different search texts in JMeter, and the test keeps
running continuously; running continuously means the cache has been built, so
there should be better response times now. Shouldn't there?
Thanks

> On Oct 24, 2018, at 12:20 PM, Walt [...]
You can set the queryResultCache and filterCache "size" parameters to zero in
solrconfig.xml to disable those caches.
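For example (a sketch of the relevant solrconfig.xml entries; the cache
classes and the other attributes shown are just the common defaults):

<!-- size="0" effectively disables the cache -->
<filterCache class="solr.FastLRUCache" size="0" initialSize="0" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="0" initialSize="0" autowarmCount="0"/>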
On Wed, Oct 24, 2018 at 9:21 AM Walter Underwood wrote:
Are you testing with a small number of queries? If your cache is larger than
the number of queries in your benchmark, the first round will load the cache,
then everything will be super fast.
Load testing a system with caches is hard.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer [...]
Hi,
We recently moved to Solr Cloud (Google) with 4 nodes and have a very limited
amount of data.
We are facing a very weird issue here: the Solr cluster response time for
queries is high when we have a small number of hits, and the moment we run our
test to hit the Solr cluster hard we see better response [...]
Thanks Erick and Kshitij.
Would try both the options and see what works best.
Regards,
Ankush Khanna

On Fri, 9 Sep 2016 at 16:33 Erick Erickson wrote:

The soft commit interval governs opening new searchers, which should be
"warmed" in order to load up the caches. My guess is that you're not doing
much warming and thus are seeing long search times.
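For reference, a sketch of the solrconfig.xml pieces involved (the interval
and the warming query are illustrative, not recommendations):

<!-- Open a new searcher at most once a minute -->
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>

<!-- Warm each new searcher so its caches are populated before it serves -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">some popular query</str></lst>
  </arr>
</listener>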
Most attachments are stripped by the mail server; if you want people to see
the images, put them up somewhere a [...]
Hi Ankush,
As you are updating one of the cores heavily, the hard commit will play a
major role.
Reason: during hard commits Solr merges your segments, and this is a
time-consuming process. While segments are being merged, document indexing is
affected, i.e. it gets slower.
Try figuring out the right num [...]
Hello,
We are running some tests to improve our Solr performance.
We have around 15 collections on our Solr cluster, but we are particularly
interested in one collection holding a high number of documents.
(https://gist.github.com/AnkushKhanna/9a472bccc02d9859fce07cb0204862da)
Issue:
We see tha [...]
Also, does anyone know the best precisionStep to use on a trie field (float)
definition to achieve optimal performance?
We have tried this using the new trie fields, and using standard sdouble
fields, and have had similar results. Is there a known issue with randomly
slow queries when doing range searches with Solr?

Thanks for any support you can offer.
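For reference, a sketch of the schema.xml definitions in question (field name
hypothetical; precisionStep="8" was the usual default for TrieFloatField, a
smaller step indexes more terms per value, which speeds up range queries at
the cost of a larger index, and precisionStep="0" disables the extra trie
terms):

<fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8"
           omitNorms="true" positionIncrementGap="0"/>
<field name="price" type="tfloat" indexed="true" stored="true"/>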
gwk, thanks a lot.
Elaine

On Wed, Sep 9, 2009 at 11:14 AM, gwk wrote:

Hi Elaine,

You can page your result set with the rows and start parameters
(http://wiki.apache.org/solr/CommonQueryParameters). So, for example, to get
the first 100 results one would use the parameters rows=100&start=0, and for
the second 100 results rows=100&start=100, etc.

Regards,

gwk
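A minimal paging loop along those lines (hypothetical host, core, and query):

import requests

SOLR_SELECT_URL = "http://localhost:8983/solr/mycore/select"  # hypothetical
ROWS = 100

start = 0
while True:
    data = requests.get(
        SOLR_SELECT_URL,
        params={"q": "sentence:\"some phrase\"", "rows": ROWS,
                "start": start, "wt": "json"},
        timeout=60,
    ).json()
    docs = data["response"]["docs"]
    if not docs:
        break
    for doc in docs:
        pass  # process one page of 100 results here
    start += ROWS

Note that very deep offsets get progressively more expensive for Solr; plain
rows/start paging is fine for moderate result sets like the 10K discussed
here.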
gwk,
Sorry for the confusion. I am doing a simple phrase search among the
sentences, which could be in English or another language. Each doc has only
several ID numbers and the sentence itself.
I did not know about paging; it sounds like what I need. How do I achieve
paging with Solr?
I also need to sto [...]
Hi Elaine,
I think you need to provide us with some more information on what exactly you
are trying to achieve. From your question I assumed you wanted paging (getting
the first 10 results, then the next 10, etc.), but reading it again ("slice my
docs into pieces") I now think you might've [...]
Please take a look at
http://issues.apache.org/jira/browse/SOLR-1379

Alex.

On Wed, Sep 9, 2009 at 5:28 PM, Constantijn Visinescu wrote:
I want to get the 10K results, not just the top 10.
The fields are regular language sentences; they are not large.
Is clustering the technique for what I am doing?

On Wed, Sep 9, 2009 at 10:16 AM, Grant Ingersoll wrote:
Just wondering, is there an easy way to load the whole index into RAM?

On Wed, Sep 9, 2009 at 4:22 PM, Alex Baranov wrote:
There is a good article on how to scale the Lucene/Solr solution:
http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Scaling-Lucene-and-Solr

Also, if you have a heavy load on the server (a large number of concurrent
requests), then I'd suggest considering loading the index into RAM.
Do you need 10K results at a time, or are you just getting the top 10 or so in
a set of 10K? Also, are you retrieving really large stored fields? If you add
&debugQuery=true to your request, Solr will return timing information for the
various components.

On Sep 9, 2009, at 10:10 AM, Elain [...]
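For example (hypothetical host, core, and query; the per-component timings
show up under the debug section of the response):

import requests

data = requests.get(
    "http://localhost:8983/solr/mycore/select",  # hypothetical
    params={"q": "sentence:test", "rows": 10,
            "debugQuery": "true", "wt": "json"},
    timeout=60,
).json()

# Prepare/process timings per search component (query, facet, highlight, ...)
print(data["debug"]["timing"])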
Hi,
I have 20 million docs in Solr. If my query returns more than 10,000 results,
the response time is very, very long. How do I resolve such a problem? Can I
slice my docs into pieces and let the query operate within one piece at a
time, so that the response time and response data will be more man [...]
Can you try with the latest nightly build?
That may help pinpoint whether it's index file locking contention or OS disk
cache misses when reading the index. If the time never recovers, it suggests
the former.
-Yonik

On Mon, Dec 15, 2008 at 5:14 PM, Sammy Yu wrote:
Hi guys,
I have a typical master/slave setup running with Solr 1.3.0. I did some basic
scalability testing with JMeter, tweaked our environment, and determined that
we can handle approximately 26 simultaneous threads and get end-to-end
response times of under 200 ms, even with typically every 5 mi [...]
On 31-Jan-08, at 9:41 AM, Andy Blower wrote:

Yonik Seeley wrote:
[...]

This surprises me because the filter query submitted has usually already been
submitted along with a normal query, and so should be cached in the filter
cache. Surely all Solr needs to do is return a handful of fields for [...]
> [...]
> -Yonik

Sorry, that flew over my head...
Thanks very much for your help. I wish I had more time during this evaluation
to delve into the code. I don't suppose there's a document with a guided tour
of the codebase anywhere, is there? ;-)

P.S. I re-ran my tests without retur [...]
>>> [...] processed, everything grinds to a halt and the responses to these
>>> blank queries can take up to 125 secs to be returned!
>>
>> [...] the first 100 records in the list from the cache - or so I thought.
>>
>> Can anyone tell me what might be causing this dramatic slowdown? Any
>> suggestions for solutions would be gratefully received.
>>
>> Thanks,
>> Andy.
On Jan 31, 2008 10:43 AM, Andy Blower <[EMAIL PROTECTED]> wrote:
>
> I'm evaluating SOLR/Lucene for our needs and am currently looking at
> performance, since 99% of the functionality we're looking for is provided.
> The index contains 18.4 million records and is 58 GB in size. Most queries
> are accept [...]

--
Regards,
Shalin Shekhar Mangar.
On 14-Sep-07, at 3:38 PM, Tom Hill wrote:

Hi Mike,

Thanks for clarifying what has been a bit of a black box to me.

A couple of questions, to increase my understanding, if you don't mind.

If I am only using fields with multiValued="false", with a type of "string"
or "integer" (untokenized), does Solr automatically use approach 2? Or is [...]
On 6-Sep-07, at 3:25 PM, Mike Klaas wrote:

There are essentially two facet computation strategies:

1. cached bitsets: a bitset for each term is generated and intersected with
the query result bitset. This is more general and performs well up to a few
thousand terms.

2. field enumeratio [...]
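In later Solr releases these two strategies map onto the facet.method
parameter: "enum" enumerates terms and intersects cached bitsets (strategy 1),
while "fc" steps over the documents in the result set using the field cache
(strategy 2). A hypothetical request selecting one explicitly:

import requests

data = requests.get(
    "http://localhost:8983/solr/mycore/select",  # hypothetical host/core
    params={
        "q": "*:*",
        "rows": 0,
        "facet": "true",
        "facet.field": "category_s",  # hypothetical field
        "facet.method": "enum",       # or "fc"
        "wt": "json",
    },
    timeout=60,
).json()

print(data["facet_counts"]["facet_fields"]["category_s"])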
On 6-Sep-07, at 3:16 PM, Aaron Hammond wrote:

Thank you for your response; this does shed some light on the subject. Our
basic question was why we were seeing slower responses the smaller our result
set got.

Currently we are searching about 1.2 million documents, with the source
document about 2 [...]
[...] should be less.

Thanks again for all your help.

Aaron

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Yonik Seeley
Sent: Thursday, September 06, 2007 4:17 PM
To: solr-user@lucene.apache.org
Subject: Re: Slow response
On 9/6/07, Aaron Hammond <[EMAIL PROTECTED]> wrote:

I am pretty new to Solr, and this is my first post to this list, so please
forgive me if I make any glaring errors.

Here's my problem. When I do a search using the Solr admin interface for a
term that I know does not exist in my index, the QTime is about 1 ms.
However, if I add facets to the searc [...]