I know, I know, just brainfarting aloud... No water cooler ;-)
On Jul 10, 2017 11:50, "Ralph Goers" wrote:
> How is that any different than creating a new manager as your recovery
> logic? Remember, the reasons for doing this are a) the managers live across
> reconfigurations while appenders don’t
How is that any different than creating a new manager as your recovery logic?
Remember, the reasons for doing this are a) the managers live across
reconfigurations while appenders don’t and b) the appenders should be fairly
simple - the managers deal with all these kinds of complexities. For example …
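In sketch form the split looks something like this (names made up, not the real
classes; just the shape of it): the appender only delegates, and everything
stateful lives in the manager.

class ResourceManager {
    // Stand-in for a manager: a real one would own the connection and all
    // retry/recovery logic, and it outlives reconfigurations.
    void write(String serializedEvent) {
        // placeholder -- a real manager would connect lazily, send, and
        // recover on failure without the appender ever noticing
    }
}

class SimpleAppender {
    private final ResourceManager manager;

    SimpleAppender(ResourceManager manager) {
        this.manager = manager;
    }

    // The appender stays dumb: format and hand off, nothing more.
    void append(String event) {
        manager.write(event);
    }
}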
Another idea, possibly whacky, is for an Appender to have two managers.
When one goes bad, you initialize the 2nd based on the same factory data,
then close the 1st one. The 2nd becomes current, rinse, repeat. Not sure
how this fits in with manager caching.
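Very roughly, something like this (all names invented, ignoring the real
Manager/factory API):

class SwappableManagerHolder<M extends AutoCloseable> {
    // Same factory data the 1st manager was built from.
    private final java.util.function.Supplier<M> factory;
    private volatile M current;

    SwappableManagerHolder(java.util.function.Supplier<M> factory) {
        this.factory = factory;
        this.current = factory.get();
    }

    M get() {
        return current;
    }

    // Called when the current manager is known to be bad: build the 2nd from
    // the same factory data, make it current, then close the 1st.
    synchronized void replace() throws Exception {
        M failed = current;
        current = factory.get();
        failed.close();
    }
}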
Gary
On Jun 28, 2017 13:37, "Matt Sicker" wrote:
This topic makes me think some sort of CircuitBreakerAppender may be useful
as an analogue to FailoverAppender. Instead of permanently failing over to
the backup appenders, this appender would eventually switch back to the
primary appender when it's safely back up. Supporting a full open/half open/ …
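The state handling could sit on something like this (just a sketch, nothing
here is existing Log4j code):

class CircuitBreaker {
    // CLOSED    -> send to the primary appender
    // OPEN      -> send to the failover appenders
    // HALF_OPEN -> let events probe the primary again
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures;
    private long openedAt;
    private final int failureThreshold;
    private final long retryAfterMillis;

    CircuitBreaker(int failureThreshold, long retryAfterMillis) {
        this.failureThreshold = failureThreshold;
        this.retryAfterMillis = retryAfterMillis;
    }

    synchronized boolean allowPrimary() {
        if (state == State.OPEN
                && System.currentTimeMillis() - openedAt >= retryAfterMillis) {
            state = State.HALF_OPEN;   // time to probe the primary again
        }
        return state != State.OPEN;
    }

    synchronized void recordSuccess() {
        failures = 0;
        state = State.CLOSED;          // primary is healthy, switch back
    }

    synchronized void recordFailure() {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN;        // trip open: callers should use the backups
            openedAt = System.currentTimeMillis();
        }
    }
}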
Managers are not designed to be shut down and restarted. If you are causing your
manager to come and go for the JMS support, then I don’t think you implemented
it correctly. If you look at the TCP socket manager, it has a reconnector inside
of it to handle retrying the connection. JMS should work t…
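Roughly the shape of it (not the actual TcpSocketManager code, just a sketch of
the idea): the manager keeps a small background reconnector that retries the
connection until it comes back, so the appender never has to care.

class ReconnectingManager implements AutoCloseable {
    private volatile java.net.Socket socket;
    private volatile boolean closed;
    private final String host;
    private final int port;

    ReconnectingManager(String host, int port) {
        this.host = host;
        this.port = port;
        startReconnector();
    }

    // Background thread that keeps trying to (re)establish the connection.
    private void startReconnector() {
        Thread t = new Thread(() -> {
            while (!closed) {
                if (socket == null || socket.isClosed()) {
                    try {
                        socket = new java.net.Socket(host, port);
                    } catch (java.io.IOException e) {
                        // still down -- try again after the pause below
                    }
                }
                try { Thread.sleep(5_000); } catch (InterruptedException e) { return; }
            }
        }, "reconnector");
        t.setDaemon(true);
        t.start();
    }

    void write(byte[] bytes) throws java.io.IOException {
        java.net.Socket s = socket;
        if (s == null || s.isClosed()) {
            throw new java.io.IOException("connection not available yet");
        }
        s.getOutputStream().write(bytes);
    }

    @Override public void close() throws java.io.IOException {
        closed = true;
        if (socket != null) socket.close();
    }
}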
Hi All,
I am thinking about how to best solve cases like
https://issues.apache.org/jira/browse/LOG4J2-1955
In a nutshell: Log4j starts but some external resource is not available
(like a JMS broker, a database server, and so on)
Right now, you're out of luck. Your other appenders will keep on hap…