Are you getting errors in JMeter?
On Wed, 24 Oct 2018, 21:49 Amjad Khan wrote:
> Hi,
>
> We recently moved to Solr Cloud (Google) with 4 nodes and have a very
> limited amount of data.
>
> We are facing a very weird issue here: the Solr cluster response time for
> queries is high when we have a small number
If your cache is 2048 entries, then every one of those 1600 queries is in cache.
Our logs typically have about a million lines, with distinct queries
distributed according to the Zipf law. Some common queries, a long tail, that
sort of thing.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/
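The Zipf-distributed query mix described above can be reproduced for benchmarking. Below is a hypothetical sketch (not anyone's actual tooling from this thread) that samples a query stream with a few hot queries and a long tail, using a rank-frequency weighting:

```python
import random

# Hypothetical sketch: sample a benchmark query stream according to a
# Zipf-like rank-frequency distribution, so it has a few very common
# queries and a long tail, like production logs.
def zipf_weights(n_queries, s=1.0):
    # Weight of the rank-k query is proportional to 1 / k**s.
    return [1.0 / (k ** s) for k in range(1, n_queries + 1)]

def sample_query_stream(queries, n_samples, s=1.0, seed=42):
    rng = random.Random(seed)
    return rng.choices(queries, weights=zipf_weights(len(queries), s), k=n_samples)

queries = [f"q{i}" for i in range(1000)]   # placeholder query strings
stream = sample_query_stream(queries, 10000)
# The top-ranked query appears far more often than a tail query.
```

With the default exponent s=1.0, the head query is sampled roughly a thousand times more often than the rank-1000 query, which is what makes a cache-aware benchmark realistic.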
But a zero-size cache doesn’t give realistic benchmarks. It makes things slower
than they will be in production.
We do this:
1. Collect production logs.
2. Split the logs into a warming log and a benchmark log. The warming log
should be at least as large as the query result cache.
3. Run th
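The split in step 2 can be sketched as follows (a minimal illustration with placeholder names, not the poster's actual scripts): the first chunk, at least as large as the queryResultCache, warms the caches; the remainder is what you measure.

```python
# Hypothetical sketch of the warming/benchmark split described above.
def split_log(lines, cache_size):
    # The warming slice must be at least as large as the query result cache
    # so that warming alone cannot leave the cache full of benchmark queries.
    warm = lines[:cache_size]
    bench = lines[cache_size:]
    return warm, bench

log = [f"query {i}" for i in range(5000)]       # stand-in for production log lines
warm, bench = split_log(log, 2048)              # e.g. a queryResultCache of 2048
```

You would replay `warm` once without timing, then replay `bench` and record latencies.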
Thanks Erick,
But don't you think that disabling the cache will increase the response time
instead of solving the problem here?
> On Oct 24, 2018, at 12:52 PM, Erick Erickson wrote:
>
> queryResultCache
Thanks Wunder for this prompt response.
We are testing with 1600 different search strings in JMeter, and the test keeps
running continuously. Since it runs continuously, the cache should have been
built and responses should be faster by now. Shouldn't they?
Thanks
> On Oct 24, 2018, at 12:20 PM, Walt
You can set your queryResultCache and filterCache "size" parameter to
zero in solrconfig.xml to disable those caches.
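A sketch of what that looks like in solrconfig.xml (standard cache elements inside the `<query>` section; verify the exact attributes against your Solr version's reference guide):

```xml
<!-- Inside the <query> section of solrconfig.xml: size="0" disables the cache -->
<queryResultCache class="solr.LRUCache"
                  size="0" initialSize="0" autowarmCount="0"/>
<filterCache      class="solr.LRUCache"
                  size="0" initialSize="0" autowarmCount="0"/>
```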
On Wed, Oct 24, 2018 at 9:21 AM Walter Underwood wrote:
>
> Are you testing with a small number of queries? If your cache is larger than
> the number of queries in your benchmark,
Are you testing with a small number of queries? If your cache is larger than
the number of queries in your benchmark, the first round will load the cache,
then everything will be super fast.
Load testing a system with caches is hard.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/
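The effect described above (cache larger than the benchmark's distinct query set, so only the first round misses) can be shown with a toy LRU simulation. This is an illustrative sketch, not Solr's cache implementation:

```python
from collections import OrderedDict

# Toy LRU cache illustrating the point above: if the cache is larger than
# the number of distinct benchmark queries, round one fills the cache and
# every later round is a 100% hit.
class LRU:
    def __init__(self, size):
        self.size, self.entries = size, OrderedDict()
        self.hits = self.misses = 0

    def get(self, query):
        if query in self.entries:
            self.entries.move_to_end(query)   # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            self.entries[query] = True
            if len(self.entries) > self.size:
                self.entries.popitem(last=False)  # evict least recently used

cache = LRU(2048)                      # cache larger than the query set
queries = [f"q{i}" for i in range(1600)]
for _ in range(3):                     # three benchmark rounds
    for q in queries:
        cache.get(q)
# misses == 1600 (first round only), hits == 3200 (rounds two and three)
```

After the first round the benchmark is measuring pure cache hits, which is why the numbers look unrealistically fast.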
gwk, thanks a lot.
Elaine
On Wed, Sep 9, 2009 at 11:14 AM, gwk wrote:
> Hi Elaine,
>
> You can page your resultset with the rows and start parameters
> (http://wiki.apache.org/solr/CommonQueryParameters). So for example to get
> the first 100 results one would use the parameters rows=100&start=0
Hi Elaine,
You can page your resultset with the rows and start parameters
(http://wiki.apache.org/solr/CommonQueryParameters). So for example to
get the first 100 results one would use the parameters rows=100&start=0
and the second 100 results with rows=100&start=100 etc. etc.
Regards,
gwk
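Building the paged request URLs gwk describes is mechanical: page n (0-based) of `rows` results uses `start = n * rows`. A minimal sketch, with a placeholder Solr base URL and collection name:

```python
from urllib.parse import urlencode

# Hypothetical sketch: build paged Solr /select URLs using the rows and
# start parameters described above. Base URL and collection are placeholders.
def page_url(base, q, rows=100, page=0):
    params = urlencode({"q": q, "rows": rows, "start": page * rows, "wt": "json"})
    return f"{base}/select?{params}"

base = "http://localhost:8983/solr/collection1"
url0 = page_url(base, "title:foo", rows=100, page=0)   # results 1-100
url1 = page_url(base, "title:foo", rows=100, page=1)   # results 101-200
```

You would fetch each URL in turn until a page comes back with fewer than `rows` documents.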
gwk,
Sorry for the confusion. I am doing a simple phrase search among
sentences, which could be in English or another language. Each doc has
only several id numbers and the sentence itself.
I did not know about paging. It sounds like what I need. How do I
achieve paging in Solr?
I also need to sto
Hi Elaine,
I think you need to provide us with some more information on what
exactly you are trying to achieve. From your question I also assumed you
wanted paging (getting the first 10 results, then the next 10, etc.). But
reading it again, "slice my docs into pieces" I now think you might've
Please, take a look at
http://issues.apache.org/jira/browse/SOLR-1379
Alex.
On Wed, Sep 9, 2009 at 5:28 PM, Constantijn Visinescu wrote:
> Just wondering, is there an easy way to load the whole index into ram?
>
> On Wed, Sep 9, 2009 at 4:22 PM, Alex Baranov wrote:
>
> > There is a good artic
I want to get the 10K results, not just the top 10.
The fields are regular language sentences, they are not large.
Is clustering the technique for what I am doing?
On Wed, Sep 9, 2009 at 10:16 AM, Grant Ingersoll wrote:
> Do you need 10K results at a time or are you just getting the top 10 or so
Just wondering, is there an easy way to load the whole index into ram?
On Wed, Sep 9, 2009 at 4:22 PM, Alex Baranov wrote:
> There is a good article on how to scale the Lucene/Solr solution:
>
>
> http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Scaling-Lucene-and-Solr
>
>
There is a good article on how to scale the Lucene/Solr solution:
http://www.lucidimagination.com/Community/Hear-from-the-Experts/Articles/Scaling-Lucene-and-Solr
Also, if you have a heavy load on the server (a large number of concurrent
requests), then I'd suggest considering loading the index into RAM.
Do you need 10K results at a time or are you just getting the top 10
or so in a set of 10K? Also, are you retrieving really large stored
fields? If you add &debugQuery=true to your request, Solr will return
timing information for the various components.
On Sep 9, 2009, at 10:10 AM, Elain
Can you try with the latest nightly build?
That may help pinpoint if it's index file locking contention, or OS
disk cache misses when reading the index. If the time never recovers,
it suggests the former.
-Yonik
On Mon, Dec 15, 2008 at 5:14 PM, Sammy Yu wrote:
> Hi guys,
> I have a typical ma
On 31-Jan-08, at 9:41 AM, Andy Blower wrote:
Yonik Seeley wrote:
This surprises me because the filter query submitted has usually already
been submitted along with a normal query, and so should be cached in the
filter cache. Surely all Solr needs to do is return a handful of fields
for
Yonik Seeley wrote:
>
> *:* maps to MatchAllDocsQuery, which for each document needs to check
> if it's deleted (that's a synchronized call, and can be a bottleneck).
>
Why does this need to check whether documents are deleted when normal queries
don't? Is there any way of disabling this, since I can
How often does the index change? Can you use an HTTP cache and do this
once for each new index?
wunder
On 1/31/08 9:09 AM, "Andy Blower" <[EMAIL PROTECTED]> wrote:
>
> Actually I do need all facets for a field, although I've just realised that
> the tests are limited to only 100. Ooops. So it s
Actually I do need all facets for a field, although I've just realised that
the tests are limited to only 100. Ooops. So it should be worse in
reality... erk.
Since that's what we do with our current search engine, Solr has to be able
to compete with this. The fields are a mix of non-multi, non-t
On Jan 31, 2008 10:43 AM, Andy Blower <[EMAIL PROTECTED]> wrote:
>
> I'm evaluating SOLR/Lucene for our needs and currently looking at performance
> since 99% of the functionality we're looking for is provided. The index
> contains 18.4 million records and is 58GB in size. Most queries are
> accept
I can't give you a definitive answer based on the data you've provided.
However, do you really need to get *all* facets? Can't you limit them with
the facet.limit parameter? Are you planning to run multiple *:* queries with all
facets turned on a 58GB index in a live system? I don't think that's a good
ide
On 14-Sep-07, at 3:38 PM, Tom Hill wrote:
Hi Mike,
Thanks for clarifying what has been a bit of a black box to me.
A couple of questions, to increase my understanding, if you don't
mind.
If I am only using fields with multiValued="false", with a type of
"string"
or "integer" (untokenize
Hi Mike,
Thanks for clarifying what has been a bit of a black box to me.
A couple of questions, to increase my understanding, if you don't mind.
If I am only using fields with multiValued="false", with a type of "string"
or "integer" (untokenized), does solr automatically use approach 2? Or is
On 6-Sep-07, at 3:25 PM, Mike Klaas wrote:
There are essentially two facet computation strategies:
1. cached bitsets: a bitset for each term is generated and
intersected with the query result bitset. This is more general and
performs well up to a few thousand terms.
2. field enumeratio
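Strategy 1 above can be illustrated with a toy example (not Solr's actual code): one bitset per term, intersected with the query-result bitset, with the popcount of each intersection giving that term's facet count. Doc IDs are bit positions in a Python int used as a bitset.

```python
# Illustrative sketch of the cached-bitset facet strategy described above.
# Each doc ID is a bit position; a facet count is the popcount of the
# intersection of a term's bitset with the query-result bitset.
def make_bitset(doc_ids):
    bs = 0
    for d in doc_ids:
        bs |= 1 << d
    return bs

term_bitsets = {                      # hypothetical per-term bitsets
    "red":  make_bitset([0, 2, 5]),
    "blue": make_bitset([1, 2, 3]),
}
query_result = make_bitset([2, 3, 5])  # docs matching the current query

facet_counts = {term: bin(bs & query_result).count("1")
                for term, bs in term_bitsets.items()}
# red: docs {2, 5} -> 2; blue: docs {2, 3} -> 2
```

This is why the strategy scales with the number of distinct terms: every term costs one bitset intersection per query, which stays cheap up to a few thousand terms.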
On 6-Sep-07, at 3:16 PM, Aaron Hammond wrote:
Thank-you for your response, this does shed some light on the subject.
Our basic question was why were we seeing slower responses the smaller
our result set got.
Currently we are searching about 1.2 million documents with the source
document about 2
should be less.
Thanks again for all your help.
Aaron
-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Yonik Seeley
Sent: Thursday, September 06, 2007 4:17 PM
To: solr-user@lucene.apache.org
Subject: Re: Slow response
On 9/6/07, Aaron Hammond <[EMAIL
On 9/6/07, Aaron Hammond <[EMAIL PROTECTED]> wrote:
> I am pretty new to Solr and this is my first post to this list so please
> forgive me if I make any glaring errors.
>
> Here's my problem. When I do a search using the Solr admin interface for
> a term that I know does not exist in my index the