So the initial JMH benchmark module is in. It’s fairly minimal to start,
even compared with what I’ve already done with it in older forms. I
wouldn’t rush to dive into it, though if you have a need, use case, or
interest, I’ll lend a hand.
It’s a small but significant step on a path
Yeah, a Solr interpreter is a bit more of a lift; this interpreter just
handles firing off parameterized benchmarks and dealing with the results.
It would be nice to have a Solr interpreter as well, even for this use case,
so you can easily query the state of things after a benchmark run.
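For anyone curious what “firing off parameterized benchmarks and dealing
with the results” amounts to, a minimal sketch against the plain JMH Runner
API might look something like this (the CloudIndexing include pattern and
the docCount parameter are purely illustrative, not the module’s actual
benchmark names):

    import java.util.Collection;
    import org.openjdk.jmh.results.RunResult;
    import org.openjdk.jmh.runner.Runner;
    import org.openjdk.jmh.runner.options.Options;
    import org.openjdk.jmh.runner.options.OptionsBuilder;

    public class FireOffBenchmarks {
      public static void main(String[] args) throws Exception {
        // Select benchmarks by regex and override @Param values per run.
        Options opts = new OptionsBuilder()
            .include("CloudIndexing")            // illustrative benchmark name
            .param("docCount", "1000", "100000") // illustrative parameter
            .forks(1)
            .build();

        // Run and collect the results; an interpreter could render these
        // (or dump them as JSON) back into the notebook.
        Collection<RunResult> results = new Runner(opts).run();
        results.forEach(r -> System.out.println(r.getPrimaryResult()));
      }
    }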
A decen
Thanks for the info!
The Zeppelin stuff in particular piques my interest: I explored a
Zeppelin/Solr integration a bit in SOLR-15080, but ultimately never
committed it because of some lukewarm feedback from David S on the PR
and some shifting personal priorities. If others are using Zeppelin
mayb
Yes, it’s a new Gradle module called benchmark.
I’ll likely commit the base early tomorrow. It’s been working through
precommit checks.
There are currently only two benchmarks to start, but I have more that I’ll
be adding.
Once I have a reasonable number in, I’ll run some comparisons with the 8
Just clarifying, but the "Solr Benchmark Module" you're referring to
here is your work from SOLR-15428? Or something else?
Jason
On Sun, Aug 1, 2021 at 12:16 AM Mark Miller wrote:
>
> I’m about ready to commit the first iteration of the Solr benchmark module.
>
> It is meant to target both micro
I’m about ready to commit the first iteration of the Solr benchmark module.
It is meant to target both micro and macro benchmarks, though it is
additive to, not a replacement for, Gatling and a full performance cluster.
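To give a feel for the shape these take (a sketch only; the class and
parameter names here are made up, not ones from the module), a JMH
micro-benchmark would look roughly like:

    import java.util.concurrent.TimeUnit;
    import org.openjdk.jmh.annotations.*;

    @BenchmarkMode(Mode.Throughput)
    @OutputTimeUnit(TimeUnit.SECONDS)
    @State(Scope.Benchmark)
    @Fork(1)
    public class DocBuildingBenchmark {

      // Parameterized so the same benchmark covers small and larger runs.
      @Param({"10", "10000"})
      int docCount;

      @Benchmark
      public String buildDocs() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < docCount; i++) {
          sb.append("{\"id\":\"").append(i)
            .append("\",\"title_s\":\"doc ").append(i).append("\"}\n");
        }
        return sb.toString();
      }
    }

Nothing Solr-specific in that snippet, of course; it’s just the JMH
boilerplate the real benchmarks hang off of.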
The inner workings of Solr and SolrCloud have always been something of a
mystery