Got to chime in here and +1 Anthony's and Jake's sentiments.

Let's be *very* careful and try to understand what users are trying to
achieve.

If we're providing a 'gfsh cp' option that then requires further
intervention to actually achieve what the user wants (e.g. a server restart
or whatever else is needed before the new files are actually used), then
we're no better off than we are now. 'gfsh cp' seems like a narrowly
targeted solution.

I'd prefer to see us do a better job of classloading and have that behavior
clearly specified. Someone mentioned that Geode isn't a container, but I
would argue that as soon as we can accept and run somebody else's code,
we're a container and should provide facilities to dynamically manage the
lifecycle of that code.
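
To make that concrete, here's a rough, hypothetical sketch of the kind of
lifecycle I have in mind (the class and method names are made up; this is
not Geode's actual deployment code): each deployed jar gets its own
closeable class loader, so user code can be redeployed or undeployed
without bouncing the server.

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Path;

    // Hypothetical wrapper around one deployed jar. Opening it creates an
    // isolated class loader for the user's code; closing it releases that
    // loader so a newer version of the jar can take its place.
    public class DeployedJarSketch implements AutoCloseable {
      private final URLClassLoader loader;

      public DeployedJarSketch(Path jar) throws Exception {
        this.loader = new URLClassLoader(new URL[] {jar.toUri().toURL()},
            getClass().getClassLoader());
      }

      public Class<?> loadUserClass(String name) throws ClassNotFoundException {
        return loader.loadClass(name);
      }

      @Override
      public void close() throws Exception {
        // Releasing the loader is what makes redeploy/undeploy possible
        // without a server restart.
        loader.close();
      }
    }

The details this glosses over (parent-first vs. parent-last delegation,
version isolation, what happens to in-flight calls) are exactly the things
I'd like to see clearly specified.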

--Jens

On Fri, Jan 6, 2017 at 8:31 AM, Udo Kohlmeyer <ukohlme...@pivotal.io> wrote:

> I think I can see the benefit of this feature.
>
> If you have Geode running in the cloud, it is easier to have a single
> management tool that can copy resource files to all the servers within the
> cluster.
>
> Although I would not see this as a feature I'd promote, as it could really
> be abused, I believe it would work well for cloud environments.
>
> --Udo
>
>
>
> On 1/6/17 02:38, Swapnil Bawaskar wrote:
>
>> Some applications may need to copy files to all the servers. These files
>> could be data files, configuration files needed by the application, or jar
>> files that need to be on the server's classpath but don't contain functions
>> (say, Spring Data Geode jar files).
>> We could accomplish this either by enhancing the current gfsh "deploy"
>> command to accept any kind of file and write it to the server's file
>> system, OR by creating a new gfsh "copy" command to copy any arbitrary file
>> to the servers.
>> I would personally like to repurpose the deploy command but would like to
>> hear the community's opinion.
>>
>> Thanks!
>>
>>
>