Yes, it could make sense to replace the PluginService stuff with something based on CIFPP. I developed the DI system somewhat independently from the plugin system because of the need to support components from the API, and the plugin system was then reimplemented in terms of the DI system, so any simplifications there are perfectly valid (and sound like the sort of thing I would have preferred). The DI system tries to present these concerns in a generic way that works well enough for our use cases without offering all the features one would expect from an application-level DI framework (e.g., we have no reason to integrate with HTTP sessions, HTTP requests, or other application-specific lifecycles, nor do we need a generic event bus, localization, or all the other fun things Spring, Guice, et al. support).
Really, there are two main purposes for generating and consuming metadata around plugins:

1. We need a valid ClassLoader reference from the module containing the plugin so we can load it at runtime. Since we don't make static references to plugin code, we need a way to link and load the plugins, and the old Java approach of taking a random ClassLoader and hoping for the best doesn't work at all in JPMS.

2. We need, at the bare minimum, a list of plugin classes to load. Preferably, we should also have plugin metadata that we already index, such as the plugin namespace (category) and name, so that we don't have to load the plugin class until a configuration references it.

Now, that's the information needed to create plugins reflectively. In that case, yes, this could be done basically as a sort of service provider handle class (to get the class loader) along with a properties file or similar containing the other metadata (see the sketches below). In fact, that's close to what we did in 2.x anyway, but the service loader handle was needed for JPMS purposes, so generating code there seemed like a good place to combine things. A properties file, at least, could be concatenated when shading.

If an updated PluginService included generated code for registering the factory beans, that could help simplify the plugin registry side of things, though it would mainly involve migrating logic from FactoryResolver implementations into the generated code.

I've had ideas in the past that the current reflection-based DI system could also be implemented in terms of a code-generation-based system, where things like lambdas could be generated to create safe ways to invoke plugin code dynamically so that we can avoid reflection entirely (and also avoid having to use method handles, var handles, and the like, which have their own limitations in GraalVM). It does seem somewhat late to start a major change like that, but only if it needs to be included in 3.0.0; a more minimal change involving the service provider handle classes and properties files would be small enough to be done in time.

If we wanted to go the more "proper" route, then we'd need to figure out some sort of way to evolve that API post-3.0.0, especially if anyone is writing the metadata out by hand.
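To make the handle-plus-properties idea a bit more concrete, here is a rough sketch; all names and the file layout are invented for illustration and are not the actual 2.x or 3.x formats:

    // (hypothetical SPI that would live in log4j-plugins, its own source file)
    // An almost-empty service whose only job is to hand out the module's
    // ClassLoader so the registry can load plugin classes lazily later on.
    public interface PluginModuleHandle {
        default ClassLoader pluginClassLoader() {
            return getClass().getClassLoader();
        }
    }

    // (generated into, or hand-written in, the plugin-bearing module, its own
    // source file; declared via `provides ... with ...` in module-info.java,
    // or via a META-INF/services entry on the class path)
    public final class MyModulePluginHandle implements PluginModuleHandle {}

    # accompanying metadata file (layout purely illustrative):
    # namespace.name = plugin class, loaded only when a configuration needs it
    Core.MyAppender = com.example.logging.MyAppender
    Lookup.myLookup = com.example.logging.MyLookup

The handle covers purpose 1 (a ClassLoader and a reason for JPMS to resolve the module), while the properties file covers purpose 2 and is trivial to concatenate.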
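And for the longer-term, code-generation idea, the generated registration could look something like the following. Again, every type here (PluginBindings, GeneratedPluginRegistrar, the sample plugins) is made up for illustration and is not the real ConfigurableInstanceFactory/Key API:

    import java.util.function.Supplier;

    // Invented placeholder plugin classes standing in for real Log4j plugins.
    final class MyAppender {}
    final class MyLookup {}

    // Invented callback the registry would hand to each module's generated code.
    interface PluginBindings {
        void bind(String namespace, String name, Supplier<?> factory);
    }

    // What an annotation processor could emit for one plugin-bearing module.
    final class GeneratedPluginRegistrar {
        static void register(PluginBindings bindings) {
            // Constructor references replace Constructor.newInstance(): no
            // reflection, no MethodHandles/VarHandles, and the call sites stay
            // visible to GraalVM's closed-world analysis.
            bindings.bind("Core", "MyAppender", MyAppender::new);
            bindings.bind("Lookup", "myLookup", MyLookup::new);
        }
    }

That is also roughly the Supplier<T>-per-plugin registration shape described below for merging PluginService with CIFPP.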
> On Nov 29, 2024, at 06:01, Piotr P. Karwasz <pi...@mailing.copernik.eu> wrote:
>
> Hi Pavel,
>
> [NOTE: I am cross-posting this to `dev@logging`, since I believe it is more appropriate there]
>
> On 29.11.2024 10:26, PavelTurk wrote:
>> In the world of JPMS, there’s a service, we implement it, and we add it. I haven’t heard of a service + PROCESSOR that needs to be used. Can you point out any other projects that involve a service + processor? The reasoning is that we should only need to know the INTERFACE, and we’ll handle the IMPLEMENTATION ourselves. After all, it’s possible to do without it—for example, by writing a PluginService manually, which would scan the module itself and return the necessary classes or just data (entries) about them. That’s why I said that using a processor here seems odd to me. In general, I think code generation should only be used as a last resort—for example, for JPA metamodels—but that’s just my opinion.
>
> The processor doesn't need to be used, but it is quite useful when you have hundreds of plugins (which is the case for Log4j Core). Feel free to update the Plugins documentation in `3.x`[1] to make that explicit. We do have a draft PR to update that page, but it is a little bit stale[2].
>
> Note that currently `PluginService` fulfills multiple functions that could be delegated to separate classes/files:
>
> 1. It provides a mapping between (namespace, pluginName) and a class name, so we don't need to scan the classpath/modulepath. In this role it replaces the binary `Log4j2Plugins.dat` file with something that IMHO is even harder to manipulate. We could probably replace it with a Java properties file or a JSON file like the one in [3]. The metadata could be generated by an annotation processor.
>
> 2. In a JPMS environment, it forces HotSpot to load all the available modules containing Log4j Core Plugins. For this to work, `PluginService` could just be an empty interface (easy to implement ;-) ).
>
> 3. In a non-JPMS environment, it simplifies the common practice of shading. `Log4j2Plugins.dat` files are hard to merge and many shading tools do not handle them. On the other hand, most tools know how to merge `META-INF/services` files.
>
> It is a little bit late in the lifecycle of 3.x to make drastic changes to how `PluginService` works, but we could:
>
> * Either make `PluginService` an interface. Those that prefer to implement the interface can do it themselves, but we will provide an annotation processor that generates a JSON file and creates a minimal implementation that reads the JSON file.
>
> * Or merge the functionality of `ConfigurableInstanceFactoryPostProcessor`[4] and `PluginService`. Strongly simplifying, what `PluginService` does is register a plugin builder `Supplier<T>` for each Log4j plugin `T` using a specific namespace and name. CIFPP does the same thing, but for "global" services like `MessageFactory`.
>
> The latter proposition needs to be confirmed by Matt, who wrote the dependency injection system from scratch. Since the DI in Log4j Core is only used in Log4j Core, personally I find that the fewer features it provides, the better.
>
> [1] https://logging.apache.org/log4j/3.x/manual/plugins.html
>
> [2] https://github.com/apache/logging-log4j2/pull/2716
>
> [3] https://github.com/apache/logging-log4j-transform/blob/main/log4j-converter-plugin-descriptor/src/main/resources/Log4j2Plugins.schema.json
>
> [4] https://logging.apache.org/log4j/3.x/javadoc/log4j-plugins/org/apache/logging/log4j/plugins/di/spi/ConfigurableInstanceFactoryPostProcessor.html
>
> [5] https://logging.apache.org/log4j/3.x/javadoc/log4j-plugins/org/apache/logging/log4j/plugins/di/Key.html
>
>>
>> I assume module-info was also generated automatically. The problem is that it’s not present in log4j-core-3.0.0-beta3-sources.jar, see [1], but it does exist in log4j-core-3.0.0-beta3.jar, see [2]. Additionally, it’s not in the repository, see [3]. So it is not possible to see what is in this file. Or was I looking in the wrong place?
>>
> You assume correctly. We use the JPMS Libraries plugins for BND[5] to generate the module descriptor. There are several reasons behind this choice:
>
> * Last time I checked, JPMS support in IDEs was terrible, especially for unit tests. Right now it seems better (in IntelliJ), but for a Maven project I need to choose if `src/test` contains whitebox tests that will run on the classpath or blackbox tests in a different package that will run on the module path.
>
> * It is hard to maintain a module descriptor manually, especially to keep track of the `transitive` attribute or unused dependencies. BND computes that perfectly.
>
> * We want to have OSGi descriptors perfectly aligned with the JPMS descriptor.
>
> Piotr
>
> [5] https://bnd.bndtools.org/chapters/330-jpms.html