Thank you for moving this topic forward, Alex.

I agree that criteria are required.

> What do you think would be reasonable criteria to establish?

Here is a quick draft. Opinions are welcome:

(Creation)
- A new feature module should default to contrib/experimental (not
limited to new contributors).
- If a developer wants to place the module elsewhere from the
beginning, they should start a thread on the dev mailing list (dev
ML). If there are no objections, it is accepted. If there are
objections, a vote will be held (Simple Majority vote:
https://www.apache.org/foundation/glossary.html#SimpleMajority).

(Promotion)
- If anyone wants to promote the module to a different location, they
should start a thread on the dev ML. If there are no objections, it is
accepted. If there are objections, a vote will be held (Simple
Majority vote).

* These criteria do not apply to the creation of modules that are not
new features (e.g., refactoring modules).

Regards,
Toshiya

On Fri, Feb 27, 2026 at 2:18 AM Alex Porcelli <[email protected]> wrote:
>
> Subhanshu,
>
> Thank you for the contribution, it's a great idea. I'll take some time to
> review it soon.
>
> Now I'm sorry to hijack the thread…
>
> Toshiya-san,
>
> The idea to have contrib/experimental makes total sense, but it lacks
> criteria. Lots of features have been added by new contributors without such
> experimental or contrib labels.
>
> I'm absolutely in favor of exploring this idea, but we need to define
> fair criteria for when something is flagged as contrib/experimental and
> for when it is promoted out of that status.
>
> Otherwise we risk creating inconsistency in how we treat contributions,
> especially since we've accepted feature additions at the top level in the
> recent past.
>
> What do you think would be reasonable criteria to establish?
>
> -
> Alex
>
> On Wed, Feb 25, 2026 at 11:15 PM, Toshiya Kobayashi <
> [email protected]> wrote:
>
> > Hi Subhanshu,
> >
> > Thank you for the contribution!
> >
> > The module looks good and will add value to the use cases you mentioned.
> >
> > My concern is that `drools-opentelemetry` looks like a feature built on
> > top of the Drools engine, so it may not be appropriate to place it at the
> > top level of the `drools` repository. Maybe we could have an umbrella
> > directory like `drools-contrib` or `drools-experimental`. Does anyone have
> > thoughts?
> >
> > Toshiya
> >
> > On Wed, Feb 25, 2026 at 8:42 PM Francisco Javier Tirado Sarti
> > <[email protected]> wrote:
> >
> > Hi,
> > Nice addition.
> > Although not completely related, I think it is worth mentioning that there
> > is already some OpenTelemetry usage in the kogito-runtimes repository. Here
> > <https://github.com/apache/incubator-kie-kogito-runtimes/tree/main/quarkus/addons/opentelemetry>
> > node listeners are used to trace the progress of the Workflow. I do not
> > think it will conflict with what you are planning to do (if I understood
> > correctly, you are adding OpenTelemetry only to the rule engine). Also,
> > since Quarkus's POM handles OpenTelemetry dependencies, we are good on
> > that too.
> >
> > On Wed, Feb 25, 2026 at 12:37 AM Subhanshu Bansal
> > <[email protected]> wrote:
> >
> > Hi Everyone,
> >
> > I have opened a PR to implement support for OpenTelemetry for Drools.
> >
> > PR: https://github.com/apache/incubator-kie-drools/pull/6595
> >
> > This is a separate module. The reason I used OpenTelemetry specifically
> > is that it gives you production visibility into how your Drools rules
> > are actually behaving in a running system. Here's what that means
> > concretely for this project:
> >
> > Distributed Tracing
> > When rules execute inside a microservice, the TracingAgendaEventListener
> > creates spans that connect to your existing traces. If a REST request
> > hits your service, triggers rule evaluation, and then calls a downstream
> > service, you get one unified trace showing exactly which rules fired,
> > how long each took, and what facts were involved. Without this, Drools
> > is a black box inside your trace: you see the request enter and leave,
> > but nothing about the decision logic in between.
> >
> > Metrics for Production Monitoring
> > The MetricsAgendaEventListener exposes counters and histograms that flow
> > into whatever backend you already use (Prometheus, Datadog, Grafana,
> > etc.). This lets you:
> >
> > - Alert when a rule suddenly fires 10x more than usual (possible data
> > issue or regression)
> > - Dashboard rule firing latency to catch performance degradations before
> > users notice
> > - Compare rule firing patterns across deployments (did the new rule
> > version change behavior?)
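[The per-rule counting and latency tracking described above can be sketched in simplified form. This is a hypothetical stand-in, not the PR's actual code: the `RuleMetrics` class and `recordFiring` method below are invented for illustration, and a real MetricsAgendaEventListener would record into OpenTelemetry instruments (LongCounter, DoubleHistogram) rather than in-memory maps.]

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical, simplified stand-in for what a metrics listener does
// conceptually: count firings and accumulate latency per rule name.
// A real implementation would delegate to OpenTelemetry instruments
// (LongCounter / DoubleHistogram) instead of plain maps.
public class RuleMetrics {
    private final Map<String, LongAdder> firingCounts = new ConcurrentHashMap<>();
    private final Map<String, LongAdder> totalNanos = new ConcurrentHashMap<>();

    // Called once per rule firing, e.g. from an afterMatchFired(...) callback.
    public void recordFiring(String ruleName, long elapsedNanos) {
        firingCounts.computeIfAbsent(ruleName, k -> new LongAdder()).increment();
        totalNanos.computeIfAbsent(ruleName, k -> new LongAdder()).add(elapsedNanos);
    }

    public long firings(String ruleName) {
        LongAdder c = firingCounts.get(ruleName);
        return c == null ? 0 : c.sum();
    }

    public long averageNanos(String ruleName) {
        long n = firings(ruleName);
        return n == 0 ? 0 : totalNanos.get(ruleName).sum() / n;
    }

    public static void main(String[] args) {
        RuleMetrics metrics = new RuleMetrics();
        metrics.recordFiring("discount-rule", 2_000);
        metrics.recordFiring("discount-rule", 4_000);
        System.out.println(metrics.firings("discount-rule"));      // 2
        System.out.println(metrics.averageNanos("discount-rule")); // 3000
    }
}
```

[With real instruments, the firing count becomes the counter you alert on (the "10x more than usual" case) and the elapsed time feeds the latency histogram behind a dashboard.]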
> >
> > Why OpenTelemetry specifically (vs. the existing Micrometer in
> > drools-metric)
> > The existing drools-metric module focuses on internal node-level
> > constraint evaluation metrics via Micrometer. OpenTelemetry adds a
> > different dimension with both metrics and tracing.
> >
> > The key differentiator is correlation. When a customer complaint comes in,
> > an SRE can search traces by the request ID, see exactly which rules fired
> > during that request, how long each took, and what facts were inserted — all
> > without adding any logging or redeploying. That’s the operational benefit
> > that the existing drools-metric module doesn’t address.
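[The correlation workflow described above, searching traces by request ID to see which rules fired, can be illustrated with a toy in-memory model. All names here (`TraceStore`, `RuleSpan`) are invented for illustration; in practice each rule firing becomes a span tagged with attributes, and the filtering is done by a tracing backend such as Jaeger or Tempo, not by application code.]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy model of trace correlation: each rule firing is a "span" tagged
// with the request ID it ran under. Filtering by request ID recovers
// exactly which rules fired for that request, with durations and facts.
// Names (TraceStore, RuleSpan) are illustrative, not from the PR.
public class TraceStore {
    public record RuleSpan(String requestId, String ruleName,
                           long durationNanos, Map<String, String> facts) {}

    private final List<RuleSpan> spans = new ArrayList<>();

    public void record(RuleSpan span) {
        spans.add(span);
    }

    // What an SRE effectively does in a tracing UI: filter spans by request ID.
    public List<RuleSpan> byRequest(String requestId) {
        return spans.stream()
                    .filter(s -> s.requestId().equals(requestId))
                    .toList();
    }

    public static void main(String[] args) {
        TraceStore store = new TraceStore();
        store.record(new RuleSpan("req-42", "eligibility", 1_500, Map.of("age", "34")));
        store.record(new RuleSpan("req-42", "discount", 900, Map.of("tier", "gold")));
        store.record(new RuleSpan("req-99", "eligibility", 1_100, Map.of("age", "61")));
        System.out.println(store.byRequest("req-42").size()); // 2
    }
}
```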
> >
> > I am open to all the suggestions. Feel free to reach out.
> >
> > Thanks!
> >

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
