+1 for restricting the file extensions and looking at what the user has
defined.

On Tue, Feb 28, 2017 at 4:46 PM Jinmei Liao <jil...@pivotal.io> wrote:

> Yeah, before we were simply looking at System.getProperty("user.dir") to
> get all the logs/stats. Now I think we should use whatever the user defined
> in the config file and do a getParent to get the parent directory to search
> for the files.
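For reference, a rough sketch of that getParent idea; the class and method
names below are made up and this is not the actual Geode code:

    import java.io.File;

    class LogDirResolver {
      // Prefer the directory of the configured log-file; fall back to the
      // member's working directory (what user.dir points at) when no
      // log-file was configured.
      static File logSearchDirectory(String configuredLogFile) {
        if (configuredLogFile != null && !configuredLogFile.isEmpty()) {
          File parent =
              new File(configuredLogFile).getAbsoluteFile().getParentFile();
          if (parent != null) {
            return parent;
          }
        }
        return new File(System.getProperty("user.dir"));
      }
    }

A relative log-file such as "logs/server.log" resolves under the working
directory, so the subdirectory case Dan raises below is covered too.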
>
> On Tue, Feb 28, 2017 at 4:34 PM, Dan Smith <dsm...@pivotal.io> wrote:
>
> > I think maybe I didn't quite understand the original proposal. Are you
> > saying you won't even look at the directory or filename the user
> > specifies, but just grab all the files that happen to be in the working
> > directory and end in .log? I don't think that's going to do the right
> > thing for most users.
> >
> > Very commonly, users do direct their logs to a separate directory, at
> > least a subdirectory of their working directory. And they often put all
> > of their logs into that directory - application logs, gemfire logs, etc.
> > So I think you really do need to look at the directory and filename they
> > are specifying.
> >
> > Now, if you are just saying that their filename has to end in .log, that
> > seems like a reasonable restriction. It seems like maybe we already have
> > that restriction if you're hitting an IndexOutOfBoundsException.
> >
> > -Dan
> >
> > On Tue, Feb 28, 2017 at 4:20 PM, Jinmei Liao <jil...@pivotal.io> wrote:
> >
> > > Darrel, it seems that if the user defines a log file to be simply
> > > "serverLog" (a filename with no "."), then when the log file rolls over
> > > once the file size limit is reached, we get an IndexOutOfBoundsException
> > > while trying to figure out what the rolled-over filename should be.
> > >
> > > And if the user defines the log file name to be "server.log.gz", then on
> > > rollover the new filename seems to be "server.log_01_01.gz". It does not
> > > look like we handle the ".gz" suffix correctly. For now, we do want to
> > > enforce that the log/stats filename ends with an appropriate suffix.
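To make the failure concrete: splitting on the last "." alone blows up on a
name with no dot at all, and treats only ".gz" as the suffix of
"server.log.gz". A defensive sketch (not the actual rolling code; the helper
name is made up):

    class RollSuffix {
      // Split "server.log.gz" into ("server", ".log.gz") and "serverLog"
      // into ("serverLog", ""), instead of indexing past the end of the
      // string.
      static String[] splitBaseAndSuffix(String fileName) {
        for (String suffix : new String[] {".log.gz", ".gfs.gz", ".log", ".gfs"}) {
          if (fileName.endsWith(suffix)) {
            return new String[] {
                fileName.substring(0, fileName.length() - suffix.length()),
                suffix};
          }
        }
        return new String[] {fileName, ""}; // unrecognized name: no crash
      }
    }

A rolled-over name can then be assembled as base + rollMarker + suffix, so
the full ".log.gz" ending is preserved.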
> > >
> > > On Tue, Feb 28, 2017 at 3:16 PM, Darrel Schneider <dschnei...@pivotal.io>
> > > wrote:
> > >
> > > > It sounds like you will pick up any file in the working directory
> > > > that ends with ".log" or ".log.gz". But what if the geode server is
> > > > sharing a directory with something else that is also writing files
> > > > with these extensions? Or if multiple geode servers are running in
> > > > the same directory? I think it would be better to use the configured
> > > > log file name and stat archive name to find the logs and stats to
> > > > gather. The rolling code that finds all the existing logs and stats
> > > > has already been written. Any bugs in that code need to be fixed
> > > > anyway, since they would break the code that removes old files based
> > > > on disk space. So it seems like you should be able to use this same
> > > > code to get a list of the files to copy.
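If reusing the rolling code turns out to be awkward, here is a rough sketch
of finding files by the configured name instead; the glob and the rolled-file
naming assumed here are illustrative and may not match Geode's actual scheme:

    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.List;

    class RolledFileFinder {
      // Given the configured log file (e.g. "server.log"), collect that file
      // plus any rolled-over variants sitting next to it, e.g.
      // "server-01-01.log" and "server-01-01.log.gz".
      static List<Path> findLogFiles(Path configuredLogFile) throws IOException {
        Path dir = configuredLogFile.toAbsolutePath().getParent();
        String name = configuredLogFile.getFileName().toString();
        String base = name.endsWith(".log")
            ? name.substring(0, name.length() - ".log".length())
            : name;
        List<Path> result = new ArrayList<>();
        try (DirectoryStream<Path> stream =
                 Files.newDirectoryStream(dir, base + "*.{log,log.gz}")) {
          for (Path p : stream) {
            result.add(p);
          }
        }
        return result;
      }
    }

The same base-name approach would work for the stat archive (.gfs/.gfs.gz),
and it avoids picking up another server's files in a shared directory as long
as the configured names differ.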
> > > >
> > > >
> > > > On Tue, Feb 28, 2017 at 2:57 PM, Dan Smith <dsm...@pivotal.io> wrote:
> > > >
> > > > > I'm a bit confused by (1). Isn't it actually more complicated for
> > > > > you to restrict log collection to a relative path? Why not just
> > > > > look for log files no matter where they are written to? I also
> > > > > don't really follow the argument about why a user that writes to
> > > > > /var/logs is not going to want to use this command. Won't all users
> > > > > want to be able to gather their logs using this command?
> > > > >
> > > > > (2) seems reasonable. It seems like we should restrict the file
> > > > > names if we are going to have this limitation.
> > > > >
> > > > > -Dan
> > > > >
> > > > > On Tue, Feb 28, 2017 at 2:43 PM, Jinmei Liao <jil...@pivotal.io>
> > > > > wrote:
> > > > >
> > > > > > Hello community,
> > > > > >
> > > > > > We are currently trying to improve what "export logs" should do.
> > > > > > Currently, export logs only exports the logs (filtered by logLevel
> > > > > > and start and end date) to each individual member's file system.
> > > > > > We want all the members' logs exported to a central location, and
> > > > > > if you are connecting over http, they will be exported to your
> > > > > > local file system. This is to facilitate gathering logs in cloud
> > > > > > environments.
> > > > > >
> > > > > > That said, for the first round of implementation, we would like
> > > > > > to impose these restrictions on this command:
> > > > > > 1) it will only look for the logs/stats in each member's working
> > > > > > directory.
> > > > > > 2) it will only look for files that end with .log, .log.gz, .gfs
> > > > > > or .gfs.gz.
> > > > > >
> > > > > > Background for 1): if you started your locator/server with
> > > > > > "log-file" or "statistics-archive-file" set to an absolute path,
> > > > > > it will write these files to that location, but if you simply give
> > > > > > it a relative path, the files will be written to the member's
> > > > > > working directory. The reasoning behind 1) is that this command is
> > > > > > mostly for those environments where you can't easily go to the
> > > > > > member's filesystem to get the logs; if you have started your
> > > > > > server/locator with an absolute path like "/var/logs", we are
> > > > > > assuming you already know how to get the logs, so this command
> > > > > > would not mean much to you.
> > > > > >
> > > > > > For restriction 2), since log and stats files roll over, it is
> > > > > > much easier to find the target files by extension rather than by
> > > > > > file name pattern. We could either disallow starting the
> > > > > > server/locator with other file name suffixes or just post a
> > > > > > warning. We would need the community's input on this.
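Whichever way the enforcement goes, the check itself is small; a sketch
(names are illustrative):

    import java.nio.file.Path;

    class ExportableFiles {
      private static final String[] SUFFIXES = {".log", ".log.gz", ".gfs", ".gfs.gz"};

      // Restriction 2): only gather files whose names end in one of the
      // recognized suffixes. The same check could be used at startup to
      // reject (or warn about) other configured log/stat file names.
      static boolean isExportable(Path file) {
        String name = file.getFileName().toString();
        for (String suffix : SUFFIXES) {
          if (name.endsWith(suffix)) {
            return true;
          }
        }
        return false;
      }
    }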
> > > > > >
> > > > > > Any feedback is appreciated.
> > > > > >
> > > > > > --
> > > > > > Cheers
> > > > > >
> > > > > > Jinmei
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Cheers
> > >
> > > Jinmei
> > >
> >
>
>
>
> --
> Cheers
>
> Jinmei
>
