Nice, you can put virtual threads in the query scatter gather and configure
jetty to use virtual threads out of the box. Low hanging nice wins.
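The fan-out being described - one cheap task per shard, merged at the end - can be sketched with a thread pool (Python purely for illustration; with virtual threads the per-task cost is low enough to just spawn one per shard; shard names and the query function are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def query_shard(shard, q):
    # Placeholder for a per-shard HTTP request; returns hypothetical hits.
    return [f"{shard}:{q}:doc{i}" for i in range(2)]

def scatter_gather(shards, q):
    # Fan out one task per shard, then merge the partial results.
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        futures = [pool.submit(query_shard, s, q) for s in shards]
        merged = []
        for f in futures:
            merged.extend(f.result())
        return merged

print(len(scatter_gather(["shard1", "shard2", "shard3"], "*:*")))  # 6
```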
When we looked at the issue (I’m not looking at the code), it didn’t look
like a lack of CPU problem - it looked like, by default you would get one
thread, I suppose trying to be the same as you’d get previously by default.
Except, you’d now get Lucene’s parallel segment search code path with that
That would normally be done if you want to get those params before you have
parsed the form data. At some point, there were cases of that. And it’s
likely done in an attempt to be generic and work with whatever format
since it’s essentially a feature in user code. If the current use doesn’t
need
I go into a bunch of this silliness in the presentation I put together on
the Overseer. I’ll look into sharing it.
ZkStateReader is obviously not in the Overseer class, but it’s all one
communication system; it’s de facto Overseer code and only slightly better
than the Overseer class.
A state upd
I'm not currently looking at any code, but if the idea is that you put in
that assert and ran the nightly or non-nightly tests, I wouldn't come to
the conclusion that that code path is never hit unless you've walked
through all of the possible concurrency potential around it in the code.
- MRM
The benchmark module is focused on micro-benchmarks - benchmarking specific
features or specific code. These kinds of benchmarks are notoriously
difficult to get right in Java, and the benchmark module is built on JMH,
in order to make it much easier to get things right.
The module has a built-in
probably the easiest way to get input on this one is to find what seems to
be the cause of what looks like global lock behavior and then mention that
cause here.
No, you wouldn't expect replication to take turns and go one at a time, but
you're more likely to get relevant feedback if you can point
Maybe.
Yeah the song is just meant to be funny.
The long / extremely short, limited value, overview of my view is that
pretty much every classical way you can share clusterstate with Zookeeper
is an Overseer design, at least how I defined or thought of an Overseer
before it had any implementation
I would disavow any of those videos as a technical resource.
Funny enough, I was just wrapping up a small internal presentation to share
my take on the Overseer. The writing has easily been eclipsed by this song
I wrote for the outro though.
https://youtu.be/VlJUJan9WGU
Wow, I never knew meetup.com charged - and it looks like as much as a
streaming service. Wow. The only value I know of that meetup.com really
provides is discoverability and rsvp. Neither of which seems that valuable
for this.
It would be crazy to pay that monthly fee if you didn’t already. If it
There is a publish-node-as-down-and-wait method that just waits until the
down states show up in the cluster state. But waiting won't do any good
until down is actually published, and it still is not. I'm pretty sure down has
never been published on startup despite appearances. I've seen two
ramificatio
It's too bad HttpSolrServer set up this client philosophy. Its momentum was
directly opposite to what you want: a SolrClient that can optionally stream or
load balance and a SolrCloudClient that wraps it.
[Mark Miller - Chat @
Spike](https://spikenow.com/r/a/?ref=spike-organic-sig
If you didn’t care about adhering 100% to the spec, 206 is probably the best
abuse of the spec option there is, with maybe 202 coming in a weak second.
On March 20, 2024 at 21:59 GMT,
operation is likely to depend on that.
- MRM
On Tue, Feb 6, 2024 at 8:22 PM Gus Heck wrote:
> Ah, unless someone fixed it since the last time Mark Miller ranted about
> it, it's rooted in the fact that the call to create a collection returns to
> the user before the collection i
+1
Successfully smoke tested the Solr Operator v0.8.0!
On Mon, Oct 16, 2023 at 2:37 PM Jason Gerlowski
wrote:
> Please vote for release candidate 1 for the Solr Operator v0.8.0
>
> The artifacts can be downloaded from:
>
>
> https://dist.apache.org/repos/dist/dev/solr/solr-operator/solr-operato
The primary reason is as Ishan says - so that update reorders from leader
to replica can be handled in both normal and failure cases.
It’s also true that a part of the reason that the per document, NRT design,
with versions, was chosen was a desire to support per document optimistic
concurrency.
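Per-document optimistic concurrency with versions can be sketched as a compare-and-set against a stored version (Python, toy in-memory store; all names illustrative - Solr's real mechanism is the `_version_` field checked on update):

```python
class ConflictError(RuntimeError):
    """Raised when an update carries a stale version."""

class VersionedStore:
    """Toy per-document optimistic concurrency: every document carries a
    version; an update must present the version it read, and a mismatch
    means a concurrent writer got there first."""
    def __init__(self):
        self.docs = {}  # doc_id -> (version, value)

    def get(self, doc_id):
        return self.docs.get(doc_id, (0, None))

    def update(self, doc_id, value, expected_version):
        current, _ = self.get(doc_id)
        if current != expected_version:
            raise ConflictError(f"{doc_id}: at v{current}, update expected v{expected_version}")
        self.docs[doc_id] = (current + 1, value)
        return current + 1
```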
Yeah, it’s not going to fix that updates can come in too early if you just
delay when the replica publishes active. It’s still going to show up active
when it’s not. That gets rectified if you end up replicating the index,
it’s when you peer sync that it can be a persistent problem. And in both
cas
Yes, you are correct. It doesn’t really work. Depending on the distributed
mode you are running in, it may still publish the cores as down, in one of
the modes it sends a down node cmd to the Overseer which should do it based
on what cores are in the cluster state. In that case it should still
publ
Oh I’m not referring to your proposal, just what happens with the current
system and what I had done around DOWN state to address it. The problem
there is the missing replica state when you come up as the live node can’t
cover for it. How you cover for that state could be done in a lot of ways,
it’
Actually, I think what I did was move the DOWN state to startup. Since you
can’t count on it on shutdown (crash, killed process, state doesn’t get
published for a variety of reasons), it doesn’t do anything solid for the
holes where you are indexing and a node cycles. So it can come up in any
state
Yeah, I think a jira issue or two was filed for it, but I didn't see
anything user facing go in. You can do it for queries by asking the
overseer to publish a DOWN state though. It won't drop indexing leadership
until you close the core, but it will prevent the temporary slow/hotspot
you get if you
That did require some changes around live node handling, which is why a
different approach as you suggest would also be reasonable. You still do
want to solve for the original motivation of DOWN - stopping search traffic
to the node before things start closing.
Yeah, I took the DOWN state out altogether in shutdown as it’s problematic
and effectively sugar for the user view of the cluster state - as far as
the system goes, if the ephemeral live node is gone, that node is down,
regardless of the replica state. There is some value in being able to
remove a
ple committers. I would be hesitant
> to
> > support an official Apache release for the same without testing or
> interest
> > by the broader community.
> >
> > Towards that, can we invite community members to try it out from the
> > sandbox repo itself, and
The only real complexity around it is properly dealing with the queue in a
large scale production environment, and none of that is code complexity.
CrossDC is a critical feature for many, the problem with the previous
iteration was it tried to be the queuing system and was obviously never
going to
It has integration tests, Kafka has an embedded version for tests, just no
CI setup currently. If it comes into Solr, it will just pick up Solr’s CI.
The design can work with any queuing system, but due to the various
intricacies involved in the different queue implementations, adding support
is b
I think the main motivation would be cost savings.
The main thing I like about keeping it separate is the ability to have an
independent release cycle. I initially preferred a separation due to that.
But the cost for what it actually is, is high.
It essentially consists of two fairly simple part
Oh and the upside to actually getting this right, beyond some bug prevention
and minor behavior improvements, is that the system can weather high load
situations dramatically better. For instance, it’s not uncommon if you are
hammering the system with indexing to start running into exceptions around
losin
And lastly, while you can give up on session loss and not just on session
expiration and in most cases that won’t be detrimental, other cases and zk
recipes can rely on the fact that only session loss, and not connection loss,
is a showstopper.
A little key piece that’s not super clear in there: this type of
alternative approach allows you to ensure requests against an old session
will get a session expired exception and never succeed.
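The retry semantics being described can be sketched like this (Python, with hypothetical exception types standing in for ZooKeeper's connection-loss and session-expiration events): retry across transient connection loss, but never past session expiration.

```python
import time

class ConnectionLoss(Exception):
    """Stand-in for a transient ZooKeeper connection-loss event."""

class SessionExpired(Exception):
    """Stand-in for session expiration - the old session can never succeed."""

def with_zk_retry(op, retries=5, delay=0.01):
    """Retry op across transient connection loss (the session may still be
    alive on the server), but surface session expiration immediately:
    any work tied to the old session has to be restarted from scratch."""
    for attempt in range(retries):
        try:
            return op()
        except SessionExpired:
            raise  # never retry: ephemeral nodes and watches are gone
        except ConnectionLoss:
            if attempt == retries - 1:
                raise
            time.sleep(delay)  # give the client a moment to reconnect
```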
Apologies if this is not exactly clear: I spoke, some AI automatically turned
it into text that it found to be clearer, and I pasted…
The concept behind these retries with Zookeeper is to allow for recovery of
lost connections if they happen before the session times out. It is
recommended to only fail a
If you are just looking for a “path”, you could add a rewrite rule to
jetty.xml as an equivalent I believe.
On Fri, Jul 28, 2023 at 11:13 AM Jason Gerlowski
wrote:
> > I discovered that the “hostContext”, i.e the “/solr” bit of the URL can
> actually be changed!
>
> 1. I wonder if/how this works
nder high load is going to increase the load (more
> in flight stuff) plus the usual priority inversion issue (not starvation)
> given we're reasonably not going to preempt.
>
> Ilan
>
>
> On Sun, Jul 23, 2023, 5:35 AM Mark Miller wrote:
>
> > I think the proble
Smiley
wrote:
Thanks. I could see thread priority customization being used well in
combination with rate limiting so as to mitigate a starvation risk.
>
Yeah, I met Brian Goetz and have his excellent book.
>
~ David
>
>
On Sat, Jul 22, 2023 at 3:20 AM Mark Miller wrote:
>
It’s a hint for the OS, so results can vary by platform. Not the end of the
world but not ideal.
A scarier fact is that Brian Goetz, pretty big name in Java concurrency,
recommends against it in general, noting that it can lead to liveness /
starvation issues.
Yup, though of course the return can't simply be added to that method, but
sendError won't stop the request, it will just cause an error when there is
an attempt to write to the response later.
Oh, one more good way of duplicating that type of fail - run it in docker, or a
VM, or maybe Multipass, and give it anemic resources (though enough that
the test doesn't OOM or something)
On Thu, Jun 15, 2023 at 5:34 PM Mark Miller wrote:
> Why don't you see how it can return null?
Why don't you see how it can return null?
I'm looking at an older checkout, but I see JettySolrRunner checking for
null core containers all over, and I see it passing back null explicitly in
at least one case.
When I peek at where that core container might be coming from, I see a
provider and a f
If you are looking for performance, you probably want to do some tests to
verify you will get it.
Most of the binary protocols seem to avoid compression beyond what Solr's
JavaBin does, which is very simple numerical compression, with the idea that the
cost should be small enough to maintain a performan
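JavaBin's actual wire format is its own; as a generic illustration of the kind of cheap variable-length numeric packing such protocols favor over general-purpose compression:

```python
def encode_varint(n):
    """Variable-length encoding: 7 payload bits per byte, high bit = 'more'.
    Small numbers cost one byte; the CPU cost is near zero either way."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more bytes follow
        else:
            out.append(b)
            return bytes(out)

def decode_varint(data):
    n, shift = 0, 0
    for b in data:
        n |= (b & 0x7F) << shift
        if not (b & 0x80):
            return n
        shift += 7

print(len(encode_varint(5)), len(encode_varint(300)))  # 1 2
```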
Determining the leader is extremely cheap in the general case. It’s when
you have to exchange data (generally when that exchange involves
replication) that’s expensive. Or when you spin up 500 threads for 500
cheap operations. For the common use case, a very basic and long needed
feature in that re
I don’t have much to say about the proposal, other than to say that if an
election ever ends up involving syncing up and exchanging data, doing that
just in time is probably less than ideal for most of the more common uses
cases.
That’s just an aside though. I’d be more interested in seeing the pro
due to
the change in approach, and mostly just playing with some fun ideas around
keeping the JVMs busy for the duration of the run.
On October 9, 2022 at 6:01 GMT, Shawn Heisey w
It’s hard to see how they are not related. You can look at both as trying
to solve largely the same problem - that the Overseer is mind-bogglingly
inefficient. And it’s hard to see how pursuing one makes sense with the
other. If you were to eliminate the Overseer and essentially distribute
most of it
Unfortunately, towards the end, I had to move to less public work. So the
current public code is a few months short. That puts it in a tough place for
someone else to jump into, beyond maybe looking at some isolated things. I
had a thought that maybe I would eventually do something with the final
produ
There was too much divergence to switch to it in the end. Just a sample of
changes: I rewrote the entire Overseer and collections API implementation,
rewrote ZkStateReader, rewrote all the Zookeeper management with a
recursive znode watcher strategy, made all the primary paths async with
async IO, m
Just going to throw this out there, but I don’t think you have to know much
of anything about the likely quality and impact of changes from Lucene 9.1
to say that raising the king dependency for a software project by a full
dot release is def not “respin and another quick round of smokes material”
Yeah, there were two reasons we didn’t push embedded Zookeeper out of the gate
and even went so far as to call it a non production “demo” feature.
Dynamic reconfiguration as a cluster changed over time, and a Zookeeper
instance per Solr node being prohibitive. At least the latter was
theoretical externa
“Your test fails or not” *
I did see some time back, that thread leaks did need some attention. I
don’t know if it’s gotten any worse, but I did have some offenders
addressed.
Really though, the whole idea of removing the reliance on the test
framework to sweep leaks and poor test behavior under
The true nature and state of those tests lie far deeper than pretty much
anyone occasionally scratches with their trowel. To really take a peek, you
have to do at minimum something like set up a Jenkins farm with half a
dozen to a dozen machines with varying low to high need specs, randomize
parallel
was
not using the metrics or using the more scalable and logical metrics api.
On January 9, 2022 at 22:46 GMT, David Smiley wrote:
I noticed Solr auto-creates a metrics SolrJmxReporte
Can’t restrain myself.
The discussion whether we have "enough features" is i.m.o. silly
Lucene used to release major versions with something like no features and
just deprecation work.
It’s historically common, but the silly smell reminds me of the
stereotypical kid selling his parents on some
For larger heaps, if memory is not a constraint, the new collectors win.
If memory is a constraint, you pay for the new collectors' better latency
with throughput.
G1 remains a good default generally.
Solr 4 had both an Alpha and a Beta release. They came with essentially full release
cost, just indicated broad confidence in the initial releases and that users
should give it a spin if possible to allow a more reasonable .0 release.
Yeah that’s a pretty crappy situation for a new contributor.
You basically have to make some educated guesses. Do more tests fail on
average after your patch than before? Are the fails in tests that you added
or in tests that look related (test with backup in the name for example)?
If the fails d
Two cents from the peanut gallery:
I’ve looked at this before. My opinion:
Our stuff was just terrible, take your pick on the api version. Reasons
are numerous.
Custom endpoints are an anti-feature. Even worse for cloud.
JAX-RS looked ridiculously sensible.
--
- MRM
the damn thing look silly
and restrictive and broken next to his sensible glory now.
On December 1, 2021 at 19:50 GMT, Houston Putman
wrote:
This doesn't really address my concer
I think the other thing is that many devs like to understand what they are
doing and why at a high level rather than reach into the mud much and feel
around.
You will find a lot of devs that will spend a tremendous amount of time
working to solve problems with what they have learned twiddling gc
p
Yes, as far as I recall, it does not do what it says. The doc and violation
wording would suggest that it is checking whether you make unnecessary watchers
because one already exists at that time for a particular znode. You can have
more than one watcher watching a znode in parallel at the same time.
This wou
now if it's a realistic evolution of SolrCloud or should be
> considered science fiction at this stage.
>
> Ilan
>
> On Tue, Oct 5, 2021 at 7:33 AM Mark Miller wrote:
>
>> Well, the replicas are still waiting for the leader, so not no wait, you
>> just don’t have
Mark
On Tue, Oct 5, 2021 at 12:10 AM Mark Miller wrote:
>
>
> On Mon, Oct 4, 2021 at 5:24 AM Ilan Ginzburg wrote:
>
>> Thanks Mark for your write ups! This is an area of SolrCloud I'm
>> currently actively exploring at work (might publish my notes as well at
>&g
is a recipe zk promotes to avoid a thundering
herd effect - you can have tons of participants and it’s an efficient flow
vs 100 participants fighting to see who creates a zk node every new
election.
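The recipe referred to: each participant creates an ephemeral sequential node and watches only its predecessor, so the lowest node is leader and a node's death wakes exactly one watcher instead of the whole herd. A minimal sketch of the watch assignment (node names illustrative):

```python
def assign_watches(participants):
    """Sort the ephemeral sequential node names; the lowest is leader, and
    every other node watches only the node just before it, so one failure
    notifies one watcher rather than every participant at once."""
    ordered = sorted(participants)
    leader = ordered[0]
    watches = {node: ordered[i - 1] for i, node in enumerate(ordered) if i > 0}
    return leader, watches

leader, watches = assign_watches(["n-0003", "n-0001", "n-0002"])
print(leader, watches["n-0003"])  # n-0001 n-0002
```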
But generally we have 3 replicas. Some outlier users might use more, but
even still it’s not going
Okay, I added some basic suggestions to that leader election Jira.
Between everything I’ve dropped in this thread, I don’t see why anyone
could not fix leader election and leader sync up or come up with good
replacements or make good improvements, so I’ll just leave it at that.
Finally, if anyone
Ilan:
So I have never had any disagreements with your analysis of what does not
work. I have never had any competing designs or approaches. I am not
against different designs.
When I assert this design works and scales, it's mainly to point out that
design is never the problem I've seen here. I'v
I filed
https://issues.apache.org/jira/browse/SOLR-15672 Leader Election is flawed
- for future reference if anyone looks at tackling leader election issues.
I’ll drop a couple notes and random suggestions there
Mark
On Sat, Oct 2, 2021 at 12:47 PM Mark Miller wrote:
> At some point digg
At some point digging through some of this stuff, I often start to think, I
wonder how good our tests are at catching certain categories of problems.
Various groups of code branches and behaviors.
I do notice that as I get the test flying, they do start to pick up a lot
more issues. A lot more bug
some way to make these insights more durable / findable. Could be a link
> from the code into maybe a wiki page or something.
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Fri, Oct 1, 2021 at 11:02 PM Mark Miller wrote:
gauntlet of uninvolved, well-intentioned
developers, neither me nor anyone else would be pleased.
Mark
On Fri, Oct 1, 2021 at 2:17 PM Mark Miller wrote:
> That covers a lot of current silliness you will see, pretty simply, as most
> of it comes down to removing silly stuff, but you can fin
100's of k replicas and collections is doable even
on single machines and a handful of Solr instances, to say nothing of
pulling in more hardware. Everything required is cheap cheap cheap.
It's the mountain of unrequired that is expensive expensive expensive.
On Fri, Oct 1, 2021 at 12:47 PM
Ignoring lots of polling, inefficiencies, early defensive raw sleeps,
various races and bugs and a laundry list of items involved in making
leader processes good enough to enter a collection creation contest, here
is a more practical small set of notes off the top of my head on a quick
inspection a
in on that.
I think the implementation ends up being way more important and ends up
with far fewer resources, I’d sign up for some contribution there. Impl
will float any design but the silly or unworkable very nicely if given the
fuel.
Mark
> Ilan
>
> On Thu 30 Sep 2021 at 01:02, Ma
algorithms - documented. Maintained and pushed forward by a
separate group dedicated to the task.
But I can tell you, it’s by no means some kind of Rubik’s cube, but it is
no small lift.
Mark
On Wed, Sep 29, 2021 at 9:13 AM Mark Miller wrote:
> I very much agree. That code is the root of a v
ome-grown ZK code,
> there are maybe 2 people on the Solr team who understand what’s going on
> there (and I’m certainly not one of them!). And the maintenance cost is
> just too high over time.
>
> —
>
> Andrzej Białecki
>
> On 28 Sep 2021, at 21:31, Mark Miller wrot
P.S. this is not actually the zookeeper design I would submit to any
competition :)
I’ve gone different routes in addressing the zookeeper shortfall. This one
is relatively easy, impactful and isolated for the right developer.
Personally, with fewer scale and isolation limits, by far the bes
ence of actions that achieve some goal such as electing
> a leader among participants or grabbing a lock) got interrupted and must be
> completely restarted using a new session.
>
> On Tue, Sep 28, 2021 at 1:03 PM Mark Miller wrote:
>
>> That’s why I say that ideally you should ac
ples in mind of where this is problematic in
> existing code (or it would already be a bug) but the existing single call
> level retry approach feels fragile.
>
> Ilan
>
> On Mon 27 Sep 2021 at 19:04, Mark Miller wrote:
>
>> There are a variety of ways you could do it.
- and
instead you can survive the bombard without any "updates are disabled, zk is
not connected" fails. Unless your zk cluster is actually catastrophically
down.
Mark
On Sun, Sep 26, 2021 at 7:54 AM David Smiley wrote:
>
> On Wed, Sep 22, 2021 at 9:06 PM Mark Miller wrote:
> ...
>
>
> I would hope there are few developers doing cloud work that don’t
understand the lazy local cluster state - it’s entirely fundamental to
everything.
The busy waiting, I would be less surprised if someone didn’t understand, but
as far as I’m concerned those are bugs too. It’s an event driven system
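The event-driven alternative to busy-waiting can be sketched as a condition-variable wait signaled by a watch callback, instead of a poll-and-sleep loop (Python; all names illustrative):

```python
import threading

class ClusterStateWatcher:
    """Event-driven state visibility: waiters block on a condition variable
    that a watch callback signals, instead of polling in a sleep loop."""
    def __init__(self):
        self._cond = threading.Condition()
        self._state = {}

    def on_watch_fired(self, key, value):
        # Called from the watch/event thread when new state arrives.
        with self._cond:
            self._state[key] = value
            self._cond.notify_all()

    def wait_for(self, key, predicate, timeout=5.0):
        # Returns True once predicate holds for key, False on timeout.
        with self._cond:
            return self._cond.wait_for(
                lambda: key in self._state and predicate(self._state[key]),
                timeout=timeout)
```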
change made is immediately visible on a node,
> no matter the ZK config. That's why code often busy-waits for the update to
> become visible before continuing (common pattern in the Collection API
> commands).
>
> Ilan
>
> On Mon, Sep 27, 2021 at 8:13 AM Mark Miller wrote:
>
in the fix for it.
Mark
On Sun, Sep 26, 2021 at 9:01 PM Mark Miller wrote:
> I should also mention, I promise this test can be 100% reliable. It’s not
> code I’m going to ramp up on soon though. Also, as I said I may have a
> different test experience than others. What tests run togethe
ple of test names on your mind?
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Tue, Sep 21, 2021 at 6:59 PM Mark Miller wrote:
>
>> I can’t handle some of these test outliers anymore - the ones that run
>&
to have to
rerun the tests the most.
Mark
On Sun, Sep 26, 2021 at 8:55 PM Mark Miller wrote:
> I believe all tests still run with a 1 zk cluster, if still the case, zk
> consistency shouldn’t matter.
>
> It’s been a long while since I’ve looked into that particular doc/issue
is non-trivial. I think #1 is
>>> the real problem and #2 is a bandaid that shouldn't be needed.
>>>
>>> I think I recall mark previously ranting about how insane and terrible
>>> it would be if an RDBMS did this with CREATE TABLE...
>>>
I’m checking that I’m not in some old branch somehow … I’d have sworn
someone got rid of ZkCmdExecutor.
I can’t touch this overseer, I’m dying to see it go, so forgetting about
the fact that it’s insane that it goes to zk like this to deal with
leadership or that it’s half impervious to interrup
Perhaps I just have a unique test running experience, but this test has
been an outlier failure test in my test runs for months. Given that it’s
newer than most tests, I imagine it’s attention grabbing days are on a
downslope, so here is a poke if someone wants to check out why it often
can’t find
I can’t handle some of these test outliers anymore - the ones that run for
1-3 minutes with no added value and are easy to address. So I’m going to
address the ones that are annoying me most.
Please, after this, when changing or adding tests, if 95-99% of the
non-nightly run comes in under a minut
Looks like 1 is in. I’m looking at some before and after perfasm
results today, but I have not seen anything concerning yet.
On Thu, Sep 9, 2021 at 4:51 PM Mark Miller wrote:
I’m not quite finished but mostly through a review of SOLR-1 - it’s
looking good.
MRM
On Wed, Sep 8, 2021 at 5:45 PM Mark Miller wrote:
> Hey has an alternate wip branch that is a bit more up to date, but still 9
> different fails.
>
> He also has a benchmark for the benc
I see there is an alternate, more recent SOLR-1-wip branch I missed. Taking
a look at that.
MRM
On Wed, Sep 8, 2021 at 2:34 AM Mark Miller wrote:
SOLR-1 is critical query path and appears able to currently fail up to
150 tests per run due to what looks mostly to be stats/metrics races,
though it’s hard to be sure that’s all with all the noise. Do you have an
update you can push Mike?
MRM
On Tue, Sep 7, 2021 at 12:35 PM David Smiley w
I put together a shiv for the benchmark module that will allow for basic
regression testing against 8.x and the 8 to 9 release. Given how incredibly
tough benchmarks are to get right vs setting something ad-hoc up, there is
good value in being able to direct spent effort backwards. At this point, it
w
. So one side
limps, the other side drowns in private and independent investment,
duplicated all over for specific use cases. One group is going to be
unhappy if one group is going to end up with something that properly moves
forward.
MRM
On Mon, Sep 6, 2021 at 12:06 AM Mark Miller wrote
Embedded ZooKeeper was the plan for SolrCloud from day one. It didn’t
happen because we were waiting for dynamic zk cluster membership reconfig -
and because we ate it on ZooKeeper for 10 years, which is not very
conducive to pulling it in tight under the covers.
At this point it’s built up a gre
to derail the parent conversation),
> just my two cents on attempting that kind of "a small amount of
> effort".
>
> Regards,
>Alex.
>
> On Thu, 19 Aug 2021 at 02:05, Mark Miller wrote:
> >
> > The gap is not ideal and I’m not advocating for i
>> CoreContainer gets its own ServletContextHandler and can therefore easily
>> be loaded and unloaded from Jetty.
>>
>> I wrote many other apps in my daily life like this, the startup time
>> (especially for microservices) is great. A Jetty starting up in
>> milliseco
eb.xml for JettySolrRunner is a
>>>>> tangent. Either way we are still in a container and I think I hear some
>>>>> agreement that something should be done about the dispatch here, and both
>>>>> of you seem to agree that an actual servlet w
The downside to respecting web.xml and making JettySolrServer serve a
webapp is that loading a webapp is very expensive and slow in comparison.
JettySolrServer actually starts up extremely quickly. It’s almost more
appealing to change the Server to use the JettySolrServer strategy. It’s so
slow to
I’ve discovered through sheer lack of
sleep or distraction with little of the regular costs or individual(s)
efforts over time.
MRM
On Wed, Aug 11, 2021 at 4:37 AM Mark Miller wrote:
> Yeah, a Solr interpreter is a bit more of a lift, this interpreter just
> handles firing off paramet
t ultimately never
> committed it because of some lukewarm feedback from David S on the PR
> and some shifting personal priorities. If others are using Zeppelin
> maybe the idea is worth revisiting though...
>
> Jason
>
> On Wed, Aug 4, 2021 at 10:42 PM Mark Miller wrote:
>