Already have a Jira issue for next week. I have a script to run prod logs
against a cluster. I’ll be testing a four shard by two replica cluster with 17
million docs and very long queries. We are working on getting the 95th
percentile under one second, so we should exercise the timeAllowed feature.
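Something like this SolrJ sketch is what I have in mind for exercising it
(URL, collection, and query text are placeholders, and the Builder needs a
reasonably recent SolrJ; older versions construct HttpSolrClient directly):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TimeAllowedProbe {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            SolrQuery q = new SolrQuery("body:(long OR query OR terms)");
            q.setRows(10);
            q.setTimeAllowed(1000);  // stop collecting results after ~1000 ms
            QueryResponse rsp = client.query("mycollection", q);
            // When timeAllowed is exceeded, Solr flags the response as partial.
            Object partial = rsp.getHeader().get("partialResults");
            System.out.println("QTime=" + rsp.getQTime() + " partialResults=" + partial);
        }
    }
}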
+Walter test it
Jeff,
How much CPU does the EC2 hypervisor use? I have heard 5% but that is for a
normal workload, and is mostly consumed during system calls or context
switches. So it is quite understandable that frequent time calls would take a
bigger bite in the AWS cloud than on bare metal.
It’s presumably not a small degradation - this guy very recently suggested it’s
77% slower:
https://blog.packagecloud.io/eng/2017/03/08/system-calls-are-much-slower-on-ec2/
The other reason that blog post is interesting to me is that his benchmark
utility showed the work of entering the kernel.
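If anyone wants to check which clocksource their instance is actually on, the
kernel exposes it under sysfs; here is a quick Linux-only way to dump it from
the JVM (the paths are the standard sysfs ones, nothing Solr-specific):

import java.nio.file.Files;
import java.nio.file.Paths;

public class ClocksourceCheck {
    public static void main(String[] args) throws Exception {
        // Standard Linux sysfs locations; stock Xen-based EC2 instances often
        // report "xen", while "tsc" is what the tuning pages switch to.
        String base = "/sys/devices/system/clocksource/clocksource0/";
        System.out.println("current:   " + read(base + "current_clocksource"));
        System.out.println("available: " + read(base + "available_clocksource"));
    }

    static String read(String path) throws Exception {
        return new String(Files.readAllBytes(Paths.get(path))).trim();
    }
}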
I remember seeing some performance impact (even when not using it) and it
was attributed to the calls to System.nanoTime. See SOLR-7875 and SOLR-7876
(fixed for 5.3 and 5.4). Those two Jiras fix the impact when timeAllowed is
not used, but I don't know if there were more changes to improve the
performance.
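For a rough idea of what a single System.nanoTime() call costs on a given box
(which is where a clocksource difference would show up), a crude loop like the
one below is enough to compare two machines; the absolute numbers mean little,
and the usual JIT/warmup caveats apply:

public class NanoTimeCost {
    public static void main(String[] args) {
        final long iters = 20_000_000L;
        long sink = 0;                      // keeps the JIT from eliding the calls
        for (long i = 0; i < iters; i++) {  // warmup pass
            sink += System.nanoTime();
        }
        long start = System.nanoTime();
        for (long i = 0; i < iters; i++) {
            sink += System.nanoTime();
        }
        long elapsed = System.nanoTime() - start;
        System.out.printf("~%.1f ns per System.nanoTime() call (sink=%d)%n",
                (double) elapsed / iters, sink);
    }
}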
Hmm, has anyone measured the overhead of timeAllowed? We use it all the time.
If nobody has, I’ll run a benchmark with and without it.
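Roughly what I have in mind, as a sketch (same placeholder URL, collection, and
query as above; a real run would replay production query logs and look at the
95th percentile rather than a simple average):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class TimeAllowedAB {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            System.out.println("without timeAllowed: " + avgMillis(client, null) + " ms");
            System.out.println("with timeAllowed:    " + avgMillis(client, 1000) + " ms");
        }
    }

    // Fires the same query n times and returns the average wall-clock latency.
    static double avgMillis(HttpSolrClient client, Integer timeAllowed) throws Exception {
        final int n = 200;
        long totalNanos = 0;
        for (int i = 0; i < n; i++) {
            SolrQuery q = new SolrQuery("body:(long OR query OR terms)");
            if (timeAllowed != null) {
                q.setTimeAllowed(timeAllowed);
            }
            long t0 = System.nanoTime();
            client.query("mycollection", q);
            totalNanos += System.nanoTime() - t0;
        }
        return totalNanos / (n * 1_000_000.0);
    }
}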
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On May 2, 2017, at 9:52 AM, Chris Hostetter wrote:
>
>
> : I specify a timeout on all queries,
: I specify a timeout on all queries,
Ah -- ok, yeah -- you mean using "timeAllowed" correct?
If the root issue you were seeing is in fact clocksource related,
then using timeAllowed would probably be a significant compounding
factor there since it would involve a lot of time checks in a s
Yes, that’s the Xenial I tried. Ubuntu 16.04.2 LTS.
On 5/1/17, 7:22 PM, "Will Martin" wrote:
Ubuntu 16.04 LTS - Xenial (HVM)
Is this your Xenial version?
On 5/1/2017 6:37 PM, Jeff Wartes wrote:
> I tried a few variations of various things before we found and tried that linux/EC2 tuning page.
I started with the same three-node, 15-shard configuration I’d been using, in
an RF1 cluster. (The index is almost 700G, so that takes three r4.8xlarges if I
want to be entirely memory-resident.) I eventually dropped down to a 1/3rd-size
index on a single node (so 5 shards, 100M docs each) so I
Ubuntu 16.04 LTS - Xenial (HVM)
Is this your Xenial version?
On 5/1/2017 6:37 PM, Jeff Wartes wrote:
> I tried a few variations of various things before we found and tried that
> linux/EC2 tuning page, including:
>- EC2 instance type: r4, c4, and i3
>- Ubuntu version: Xenial and Trusty
Might want to measure the single CPU performance of your EC2 instance. The last
time I checked, my MacBook was twice as fast as the EC2 instance I was using.
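A very crude single-core check is enough for that kind of comparison; run
something like this on both machines and compare (not a real benchmark, and
the usual warmup caveats apply):

public class SingleCoreCheck {
    public static void main(String[] args) {
        final int rounds = 50_000_000;
        double sink = 0;
        for (int i = 1; i <= rounds; i++) {  // warmup pass
            sink += Math.sqrt(i);
        }
        long start = System.nanoTime();
        for (int i = 1; i <= rounds; i++) {
            sink += Math.sqrt(i);
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(rounds + " sqrt() calls on one core: " + ms + " ms (sink=" + sink + ")");
    }
}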
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On May 1, 2017, at 6:24 PM, Chris Hostetter wrote:
: tldr: Recently, I tried moving an existing solrcloud configuration from
: a local datacenter to EC2. Performance was roughly 1/10th what I’d
: expected, until I applied a bunch of linux tweaks.
How many total nodes in your cluster? How many of them running ZooKeeper?
Did you observe the hea
I tried a few variations of various things before we found and tried that
linux/EC2 tuning page, including:
- EC2 instance type: r4, c4, and i3
- Ubuntu version: Xenial and Trusty
- EBS vs local storage
- Stock openjdk vs Zulu openjdk (Recent java8 in both cases - I’m aware of
the issues
It's also very important to consider the type of EC2 instance you are
using...
We settled on the R4.2XL... The R series is labeled "High-Memory"
Which instance type did you end up using?
On Mon, May 1, 2017 at 8:22 AM, Shawn Heisey wrote:
> On 4/28/2017 10:09 AM, Jeff Wartes wrote:
> > tldr:
On 4/28/2017 10:09 AM, Jeff Wartes wrote:
> tldr: Recently, I tried moving an existing solrcloud configuration from a
> local datacenter to EC2. Performance was roughly 1/10th what I’d expected,
> until I applied a bunch of linux tweaks.
How very strange. I knew virtualization would have overhead.
I’d like to think I helped a little with the metrics upgrade that got released
in 6.4, so I was already watching that and I’m aware of the resulting
performance issue.
This was 5.4 though, patched with https://github.com/whitepages/SOLR-4449 - an
index we’ve been running for some time now.
We use Solr 6.2 on EC2 instances with CentOS 6.2, and we don’t see any
difference in performance between EC2 and our local environment.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-performance-on-EC2-linux-tp4332467p4332553.html
Sent from the Solr - User mailing list
Well, 6.4.0 had a pretty severe performance issue, so if you were using that
release you might see this; 6.4.2 is the most recent 6.4 release. But I have no
clue how changing Linux settings would alter that, and I sure can’t square that
issue with you seeing such different performance between local and EC2.