The code that gets the list of old logs/stats to remove is
org.apache.geode.internal.io.MainWithChildrenRollingFileHandler.checkDiskSpace(String,
File, long, File, LogWriterI18n).
You will see that it calls findChildrenExcept with a pattern.
You probably want to call findChildren instead; it has a couple of
callers that pass it a pattern.
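To illustrate the idea, here is a minimal sketch of a pattern-based lookup like findChildren. The class and method names below are illustrative, not the actual Geode signatures, and the rolled-name format "<base>-MM-NN.<ext>" is my reading of what the main-with-children handler produces, so verify it against the code:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical stand-in for the pattern-based lookup that
// MainWithChildrenRollingFileHandler does; names are illustrative only.
public class RolledFileFinder {
    // Rolled files are assumed to be named "<base>-MM-NN.<ext>"
    // (e.g. "server-01-02.log"), optionally compressed to ".gz".
    static Pattern rolledPattern(String baseName, String ext) {
        return Pattern.compile(Pattern.quote(baseName)
                + "(-\\d+-\\d+)?\\." + Pattern.quote(ext) + "(\\.gz)?");
    }

    // List files in dir whose names match the pattern, like findChildren.
    static List<File> findChildren(File dir, Pattern pattern) {
        List<File> matches = new ArrayList<>();
        File[] children = dir.listFiles();
        if (children != null) {
            for (File f : children) {
                if (pattern.matcher(f.getName()).matches()) {
                    matches.add(f);
                }
            }
        }
        return matches;
    }
}
```

Building the pattern from the configured base name is what keeps this from picking up files written by other processes sharing the directory.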
On Tue, Feb 28, 2017 at 3:25 PM, Jinmei Liao <jil...@pivotal.io> wrote:

> Darrel, great, can you point me to the code to get a list of the files to
> copy?
>
> On Tue, Feb 28, 2017 at 3:16 PM, Darrel Schneider <dschnei...@pivotal.io>
> wrote:
>
> > It sounds like you will pick up any file in the working directory that
> > ends with ".log" or ".log.gz".
> > But what if the geode server is sharing a directory with something else
> > that is also writing files with these extensions?
> > Or if multiple geode servers are running in the same directory?
> > I think it would be better to use the configured log file name and stat
> > archive name and use those to find the logs and stats to gather. The
> > rolling code has already been written that will find all the existing
> > logs and stats. Any bugs in that code need to be fixed anyway, since
> > they would break the code that removes old files based on disk space.
> > So it seems like you should be able to use this same code to get a list
> > of the files to copy.
> >
> >
> > On Tue, Feb 28, 2017 at 2:57 PM, Dan Smith <dsm...@pivotal.io> wrote:
> >
> > > I'm a bit confused by (1). Isn't it actually more complicated for you
> > > to restrict log collection to a relative path? Why not just look for
> > > log files no matter where they are written to? I also don't really
> > > follow the argument about why a user that writes to /var/logs is not
> > > going to want to use this command. Won't all users want to be able to
> > > gather their logs using this command?
> > >
> > > (2) seems reasonable. It seems like we should restrict the file names
> > > if we are going to have this limitation.
> > >
> > > -Dan
> > >
> > > On Tue, Feb 28, 2017 at 2:43 PM, Jinmei Liao <jil...@pivotal.io>
> > > wrote:
> > >
> > > > Hello community,
> > > >
> > > > We are currently trying to improve what "export logs" should do.
> > > > Currently, export logs only exports the logs (filtered by logLevel
> > > > and start and end date) to each individual member's file system. We
> > > > want all the members' logs to be exported to a central location, and
> > > > if you are connecting over http, they will be exported to your local
> > > > file system. This is to facilitate gathering logs in a cloud
> > > > environment.
> > > >
> > > > That said, for the first round of implementation, we would like to
> > > > impose these restrictions on this command:
> > > > 1) it will only look for the logs/stats in each member's working
> > > > directory.
> > > > 2) it will only look for files that end with .log, .log.gz, .gfs or
> > > > .gfs.gz.
> > > >
> > > > Background for 1): if you started your locator/server with "log-file"
> > or
> > > > "statistics-archive-file" with an absolute path, it will write these
> > > files
> > > > to that location, but if you simply give it a relative path, the
> files
> > > will
> > > > be written to the member's working directory. The reasoning behind 1)
> > is
> > > > that this command is mostly for those environment that you can't
> easily
> > > go
> > > > to the member's filesystem to get logs, but if you have started your
> > > > server/locator with an absolute path like "/var/logs", we are
> assuming
> > > you
> > > > already know how to get the logs, thus this command to not mean much
> to
> > > > you.
> > > >
> > > > For restriction 2), since logs and stats files roll over, it is much
> > > easier
> > > > to find the target files with extensions rather than file name
> > patterns.
> > > We
> > > > could either do not allow you to start server/locator with other file
> > > name
> > > > suffix or post a warning. We would need the community's input on
> this.
> > > >
> > > > Any feedback is appreciated.
> > > >
> > > > --
> > > > Cheers
> > > >
> > > > Jinmei
> > > >
> > >
> >
>
>
>
> --
> Cheers
>
> Jinmei
>
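P.S. For contrast, the extension-only matching that restriction 2 above proposes would look roughly like the sketch below (names here are illustrative, not the actual export-logs implementation). Note that this kind of filter cannot tell apart files written by another process, or by another geode member, sharing the same directory:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of extension-only matching per restriction 2;
// not the actual export-logs implementation.
public class ExportCandidates {
    static final String[] SUFFIXES = {".log", ".log.gz", ".gfs", ".gfs.gz"};

    // True if the file name ends with one of the export suffixes.
    static boolean isExportCandidate(String name) {
        for (String suffix : SUFFIXES) {
            if (name.endsWith(suffix)) {
                return true;
            }
        }
        return false;
    }

    // Collect every matching file in the member's working directory.
    static List<File> candidates(File workingDir) {
        List<File> out = new ArrayList<>();
        File[] children = workingDir.listFiles();
        if (children != null) {
            for (File f : children) {
                if (isExportCandidate(f.getName())) {
                    out.add(f);
                }
            }
        }
        return out;
    }
}
```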
