Chainsaw is still alive; we just need to cut a release. ;)

On 2 May 2017 at 03:09, Mikael Ståldal <mikael.stal...@magine.com> wrote:

> I did some testing with Graylog a couple of days ago. I was able to get
> Log4j 2.8.2 to work with a UDP socket and with Kafka.
>
> After implementing LOG4J2-1854
> <https://issues.apache.org/jira/browse/LOG4J2-1854>, TCP socket also
> works.
> I have updated the documentation of GelfLayout to show how to use both UDP
> and TCP sockets. (This is in Git master, but not yet released.)
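>
> For anyone trying this, a minimal configuration sketch of the two socket
> variants (host names and ports are placeholders, and the attribute names
> should be checked against the GelfLayout and SocketAppender docs; GELF
> over TCP expects uncompressed, null-delimited messages, which is what
> LOG4J2-1854 enables):
>
>   <Configuration>
>     <Appenders>
>       <!-- GELF over UDP: compressed messages are fine -->
>       <Socket name="GraylogUdp" host="graylog.example.com" port="12201"
>               protocol="UDP">
>         <GelfLayout compressionType="GZIP" compressionThreshold="1024"/>
>       </Socket>
>       <!-- GELF over TCP: no compression, null byte after each message -->
>       <Socket name="GraylogTcp" host="graylog.example.com" port="12201"
>               protocol="TCP">
>         <GelfLayout compressionType="OFF" includeNullDelimiter="true"/>
>       </Socket>
>     </Appenders>
>     <Loggers>
>       <Root level="INFO">
>         <!-- reference GraylogTcp instead to use the TCP variant -->
>         <AppenderRef ref="GraylogUdp"/>
>       </Root>
>     </Loggers>
>   </Configuration>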
>
> I have also tested Logstash, and it can read the output of our JsonLayout,
> but this is not documented on our side.
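>
> A sketch of the Log4j side of such a setup (host, port and layout
> attributes are assumptions, and it assumes a Logstash TCP input with a
> JSON codec on the other end):
>
>   <Socket name="Logstash" host="logstash.example.com" port="4560"
>           protocol="TCP">
>     <JsonLayout compact="true" eventEol="true" properties="true"/>
>   </Socket>
>
> With compact="true" and eventEol="true" each event is written as a single
> JSON line, which is easy to split on the receiving side.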
>
> It would be nice to have a separate manual page covering this topic of
> integrating Log4j with tools like Graylog, Logstash, Splunk, Flume, Lilith
> and Chainsaw (if still alive), with suggestions on how to set up and
> configure things on both sides.
>
> On Fri, Apr 28, 2017 at 6:54 PM, Matt Sicker <boa...@gmail.com> wrote:
>
> > Log4j has all the necessary plugins to support numerous scenarios, and
> > I've found three ways to support Graylog, for example:
> >
> > 1. GelfLayout + SocketAppender: send log messages straight to a Graylog
> > server. Simplest setup (no additional dependencies required), but not the
> > most reliable in theory.
> > 2. GelfLayout + KafkaAppender: send log messages to Kafka first, then
> > have a consumer on the other side ingest those into Graylog. This style
> > is more flexible since the layout doesn't necessarily need to be a
> > GelfLayout, but this is the most efficient way to handle that scenario.
> > The downside, however, is that a message in memory that hasn't been sent
> > to Kafka yet can be lost in a crash scenario, similar to the
> > SocketAppender limitation. (See the sketch after this list.)
> > 3. GelfLayout + FlumeAppender: send messages formatted for Graylog, but
> > persist them locally before passing them off to another Flume agent. I'm
> > not too familiar with this setup, but based on what Ralph has explained
> > before, this is probably the most reliable way to ensure logs are
> > aggregated. (See the sketch after this list.)
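> >
> > Rough sketches of (2) and (3), appender snippets only, with broker
> > addresses, topic names and the Flume agent details as untested
> > placeholder assumptions (the exact attributes are worth checking against
> > the KafkaAppender and FlumeAppender docs):
> >
> >   <!-- (2) GelfLayout + KafkaAppender -->
> >   <Kafka name="KafkaGelf" topic="app-logs">
> >     <GelfLayout compressionType="OFF"/>
> >     <Property name="bootstrap.servers">kafka1:9092,kafka2:9092</Property>
> >   </Kafka>
> >
> >   <!-- (3) GelfLayout + FlumeAppender: spool to local disk first, then
> >        forward to a downstream Flume agent -->
> >   <Flume name="FlumeGelf" type="persistent" dataDir="./flumeData">
> >     <Agent host="flume.example.com" port="8800"/>
> >     <GelfLayout compressionType="OFF"/>
> >   </Flume>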
> >
> > Similar patterns can be followed for other services like Logstash,
> > Splunk, etc.
> >
> > Anyway, what I'm looking for here are suggestions on architecture, and
> > perhaps to get a page written in the manual describing these types of
> > distributed system logging scenarios. If we can explain how to follow
> > these architectures, we can also find any inefficiencies in them to
> > improve on.
> >
> > --
> > Matt Sicker <boa...@gmail.com>
> >
>
>
>
> --
>
> *Mikael Ståldal*
> Senior software developer
>
> *Magine TV*
> mikael.stal...@magine.com
> Grev Turegatan 3  | 114 46 Stockholm, Sweden  |   www.magine.com
>
>



-- 
Matt Sicker <boa...@gmail.com>
