Re: Simple Java Client

2017-04-25 Thread Wes Williams
A couple of points in response to John's.

GFSH as API

We agree that GFSH is a DSL, and a really good and useful one. We agree
that we don't want our API tied to GFSH syntax. I view the valuable part of
GemFire admin as the Java code underneath GFSH, or the "Commands."

For example, to create a Java API for "Create Region", why not build it
around CreateAlterDestroyRegionCommands? That way GFSH and the Java API
share the same code. It seems too obvious, yet I don't see it being
recommended. Why not? (Note: the hard-coded formatting would need to be
removed.)
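
(A rough sketch of the shape such a facade could take. RegionAdmin and
AdminResult are hypothetical names used only for illustration - they are not
existing Geode classes, and the real CreateAlterDestroyRegionCommands methods
have different signatures.)

import java.util.Map;

/**
 * Hypothetical thin Java admin facade that delegates to the same command
 * implementations gfsh uses, minus the gfsh table formatting.
 */
public interface RegionAdmin {

  /** Success flag plus per-member status, instead of a formatted gfsh table. */
  final class AdminResult {
    public final boolean successful;
    public final Map<String, String> statusPerMember;

    public AdminResult(boolean successful, Map<String, String> statusPerMember) {
      this.successful = successful;
      this.statusPerMember = statusPerMember;
    }
  }

  /** Mirrors "create region --name=... --type=...". */
  AdminResult createRegion(String name, String regionShortcut);

  /** Mirrors "destroy region --name=...". */
  AdminResult destroyRegion(String name);
}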

Once you have the Java/GFSH/REST API as common code, you can then refactor it.
What's the objection to this approach? Once you open Java APIs to do
everything that GFSH does, you have unshackled both the developer (the Java
API) and the admin (GFSH).


REST API

I've found that most don't want to use the Dev REST API because it's
attached to a single server rather than the cluster. What about HA?


*Wes Williams | Pivotal Advisory **Data Engineer*
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire

On Tue, Apr 25, 2017 at 7:01 PM, Fred Krone  wrote:

> Good feedback.
>
> This would use the new protocol.  I should have mentioned that.
>
> The original driver for this was the Region API needs either an update or
> an alternative.  Updating has a few drawbacks: Region wasn't designed with
> open source in mind and, as Swap mentioned, it is naturally tightly coupled.
> Members of the community are already working to update Region but something
> gets fixed and it breaks something for someone else.  I think it's much
> better to provide a new interface that implements the first part of JSR 107
> (javax.cache) and get the ball rolling for the community and, perhaps, over
> time deprecate Region (although that's not a primary objective).
>
> A Java driver will probably get built regardless just to give the new
> protocol some legs. That driver also needs a decent caching interface. JSR
> 107 has already solved that part.  So let's get started on it.  If the
> community wants to go the whole way and continue JSR 107 implementation
> after that that's awesome.  Functions can also be added, etc.
>
> I intentionally did not mention anything about PCF as this pertains to
> Geode itself as an open source offering and developer experience.  I'm
> writing as a member of the community. I.e., I'm a developer who would like to
> add some caching to my application -- I can download either Geode or
> Hazelcast for free and right now it's a no brainer.  Not that we wouldn't
> keep PCF in-mind but it's out of scope for this thread.  I do believe
> getting started on a Java driver for the protocol and a standardized
> caching API are easily leveraged wins across the board.
>
>
>
>
> On Tue, Apr 25, 2017 at 3:20 PM, Swapnil Bawaskar 
> wrote:
>
> > I had looked at the JCache in the past and here are some of the things I
> > noted:
> >
> > Naming convention: Geode's region is a Cache in JSR-107, and Geode's
> Cache
> > is a CacheManager. I think it would be too confusing to change the
> meaning
> > of cache. Also, how do you document this given that Cache means different
> > things if you are talking JCache vs Geode.
> >
> > The way to create a Cache in JSR-107 is:
> > Cache<K, V> cache = manager.createCache(cacheName, Configuration<K, V> c);
> > Where it is up to the implementation to extend Configuration. Given this,
> > users will not be able to switch from an existing implementation to ours;
> > they will have to write new code, especially for Configuration, making
> > callbacks serializable, etc.
> >
> > JSR-107 will not be limited to the client. Server-side callbacks like
> > CacheLoader, CacheListener, etc. will need a handle on the JSR-107 “cache”.
> >
> > JSR-107 supports features like an EntryProcessor, which is a function
> > invoked atomically on an entry operation. We will have to make invasive
> > changes to Geode to support this.
> >
> > Given these, I don't think supporting JSR-107 is trivial.
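
(For reference, the portable JSR-107 surface being described looks roughly like
this - a minimal sketch using only the standard javax.cache API. It shows the
Cache vs. CacheManager naming and where the implementation-specific
Configuration comes in; the cache name and types are arbitrary.)

import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.spi.CachingProvider;

public class JCacheNamingExample {
  public static void main(String[] args) {
    // In JSR-107 terms, a CacheManager plays roughly the role of a Geode Cache,
    // and a Cache plays roughly the role of a Geode Region.
    CachingProvider provider = Caching.getCachingProvider();
    CacheManager manager = provider.getCacheManager();

    // MutableConfiguration is the portable baseline; anything vendor-specific
    // needs a vendor-specific Configuration class, which is the portability problem.
    MutableConfiguration<String, Integer> config =
        new MutableConfiguration<String, Integer>().setTypes(String.class, Integer.class);

    Cache<String, Integer> cache = manager.createCache("orders", config);
    cache.put("order-1", 42);
    System.out.println(cache.get("order-1"));
  }
}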
> >
> > On Tue, Apr 25, 2017 at 2:55 PM Dan Smith  wrote:
> >
> > > What transport are you planning on using? REST, or the current binary
> > > protocol? Or is this just a wrapper around the existing java client
> APIs?
> > >
> > > If this about creating a new API, I agree with what John is saying that
> > we
> > > need reduce the number of APIs we have to do the same thing in java. It
> > > seems especially confusing if we end up with different APIs that
> support
> > > distinct features - like being able 

Re: Simple Java Client

2017-04-26 Thread Wes Williams
Now we're getting some precision. Let's talk about the "raw" Geode
proprietary bad ass API! Would that "raw" API we're talking about be centered
around the commands found here:
https://github.com/apache/geode/tree/rel/v1.1.1/geode-core/src/main/java/org/apache/geode/management/internal/cli/commands

Or somewhere else?

*Wes Williams | Pivotal Advisory **Data Engineer*
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire

On Tue, Apr 25, 2017 at 11:41 PM, Jacob Barrett  wrote:

> Java and its community have standards for all of these issues, so why
> re-invent the wheel? The market doesn't want proprietary anymore; it wants
> standards and mobility.
>
> Configuration of the server should happen through MBeans. You can wrap that
> in gfsh for command line, REST for remote web based admin, use JConsole or
> any other number of JMX based enterprise management tools. By using MBeans
> the server can easily expose new discovered services without the need to
> code specific gfsh commands, REST interfaces or APIs. There is no reason my
> SDG can't be retooled to "discover" the configuration from these MBeans as
> well rather than having to be touched every time we add or change
> something. There are tools and books already written that implementors can
> consult on MBeans. There isn't anything out there on gfsh commands.
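
(A minimal sketch of the standard MBean approach described above. The bean and
attribute are illustrative only, not an existing Geode MBean.)

// CacheServerConfigMBean.java - standard MBean naming convention: <ClassName>MBean
public interface CacheServerConfigMBean {
  int getMaxConnections();
  void setMaxConnections(int maxConnections);
}

// CacheServerConfig.java - the server-side implementation, registered with the platform MBeanServer
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class CacheServerConfig implements CacheServerConfigMBean {
  private volatile int maxConnections = 800;

  @Override
  public int getMaxConnections() { return maxConnections; }

  @Override
  public void setMaxConnections(int max) { this.maxConnections = max; }

  public static void main(String[] args) throws Exception {
    MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
    ObjectName name = new ObjectName("example.geode:type=CacheServerConfig");
    // Any JMX client - JConsole, a REST bridge, or gfsh - can now discover and edit the attribute.
    mbs.registerMBean(new CacheServerConfig(), name);
  }
}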
>
> If we want to play in the Java community, especially J2EE (the other 50% of
> Java that isn't Spring), then we had better have a JSR-107 answer no matter
> what the pain is to provide it. I can pull dozens of books off the shelf
> that teach me how to effectively use JCache; how many can I pull off the
> shelf that teach me Geode's API? How many engineers can I get applications
> from by saying "must have Geode API knowledge"? I can find people with
> JCache knowledge, though. So from an implementor's perspective, having
> standards is a must. Now I don't think the JSR-107 interface should be the
> root interface but rather a facade on the "raw" Geode proprietary bad ass
> API. That API should be 100% asynchronous (reactive, SEDA, whatever the
> current buzzword for asynchronous is today). Around that API we can provide
> facades for JSR 107, ConcurrentMap (our current yet not so well behaving
> API), List, Queue, etc. Maybe even JPA, JCA, etc. The thought of putting
> all those features into a single API makes my head explode, and they don't
> need to be like they are today.
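
(A tiny sketch of that layering - an asynchronous core with thin facades on
top. AsyncRegion and BlockingRegionFacade are made-up names for illustration,
not proposed Geode API.)

import java.util.concurrent.CompletableFuture;

// The asynchronous core API: every operation returns a future.
interface AsyncRegion<K, V> {
  CompletableFuture<V> get(K key);
  CompletableFuture<V> put(K key, V value);
}

// A blocking, Map-like facade (a JSR-107 facade would wrap the same core) is then trivial:
final class BlockingRegionFacade<K, V> {
  private final AsyncRegion<K, V> delegate;

  BlockingRegionFacade(AsyncRegion<K, V> delegate) {
    this.delegate = delegate;
  }

  V get(K key) {
    return delegate.get(key).join();   // the facade just waits on the async core
  }

  V put(K key, V value) {
    return delegate.put(key, value).join();
  }
}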
>
>
>
> On Tue, Apr 25, 2017 at 8:25 PM Wes Williams  wrote:
>
> > A couple of points to interact with John's points.
> >
> > GFSH as API
> >
> > We agree that GFSH is a DSL, and a really good and useful one. We agree
> > that we don't want our API tied to GFSH syntax. I view the valuable part
> of
> > GemFire admin as the Java code underneath GFSH, or the "Commands."
> >
> > For example, to create a JAVA API to "Create Region",  why not create a
> > Java API around CreateAlterDestroyRegionCommands? In this way, GFSH and
> the
> > JAVA API share the same code. It seems too obvious yet I don't see it
> being
> > recommended.  Why not?  (Note: the hard-coded formatting would need to be
> > removed).
> >
> > Once you have the Java/GFSH/REST API as common code, you then refactor
> it.
> > What's the objection to this approach? Once you open Java API's to do
> > everything that GFSH does, then you have now unshackled the power of the
> > developer (the JAVA API) and the admin (GFSH API).
> >
> >
> > REST API
> >
> > I've found that most don't want to use the Dev REST API because it's
> > attached to a server rather than the cluster. HA?
> >
> >
> > *Wes Williams | Pivotal Advisory **Data Engineer*
> > 781.606.0325
> > http://pivotal.io/big-data/pivotal-gemfire
> >
> > On Tue, Apr 25, 2017 at 7:01 PM, Fred Krone  wrote:
> >
> > > Good feedback.
> > >
> > > This would use the new protocol.  I should have mentioned that.
> > >
> > > The original driver for this was the Region API needs either an update
> or
> > > an alternative.  Updating has a few drawbacks: Region wasn't designed
> > with
> > > open source in-mind and as Swap mentioned it is naturally tightly
> > coupled.
> > > Members of the community are already working to update Region but
> > something
> > > gets fixed and it breaks something for someone else.  I think it's much
> > > bet

Re: What to do with Singletons

2017-05-25 Thread Wes Williams
+1 to utility functions

*Wes Williams | Pivotal Advisory **Data Engineer*
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire

On Wed, May 24, 2017 at 4:59 PM, John Blum  wrote:

> On a side but related note, it would be nice if Geode had the notion of
> useful, "canned" Functions provided OOTB.  Some of the *Gfsh* functions
> would be quite useful for applications in fact, or particularly for
> framework/tools to use as well.  Sometime ago I sent a list of Functions I
> thought would be nice to have.
>
> Food for thought.
>
> On Wed, May 24, 2017 at 1:41 PM, Kirk Lund  wrote:
>
> > Thanks for pointing out that DistributionManager is internal -- I forgot
> > about that. I'm primarily concerned with internal Functions, such as
> those
> > for GFSH commands, so maybe an internal version of FunctionContext which
> > exposes more would be good for those.
> >
> > On Wed, May 24, 2017 at 11:39 AM, Darrel Schneider <
> dschnei...@pivotal.io>
> > wrote:
> >
> > > FunctionContext is an external interface so it can not expose internal
> > > interfaces like DistributionManager.
> > > But it could expose Cache. DistributedSystem is external so you could
> > have
> > > it exposed from FunctionContext but it is already exposed from Cache.
> > > SecurityService is also internal.
> > > Are you thinking that for internal Functions you would cast
> > FunctionContext
> > > to an internal that would then expose these internal classes?
> > >
> > >
> > >
> > > On Thu, May 18, 2017 at 5:13 PM, Kirk Lund  wrote:
> > >
> > > > I've been digging through our code with close attention to the
> > > singletons.
> > > > I believe the majority of singletons in Geode exist for two main
> > reasons:
> > > >
> > > > 1) Insufficient context or lack of service lookup for Function,
> > > > DistributionMessage and (Client)Command implementations.
> > > >
> > > > 2) Poor OO design. This is where you see code in one class invoking
> > > > concrete methods on another class outside of its concerns. Many of
> > these
> > > > need to be teased apart and replaced with some sort of Observer that
> > > > isolates the reaction from the source of the originating event.
> > > >
> > > > Right now my focus is on #1 because that's the area that's currently
> an
> > > > obstacle for me.
> > > >
> > > > Function, DistributionMessage and (Client)Command classes need to
> have
> > > more
> > > > context provided to them (Cache, Security, etc) or they need a better
> > > > mechanism to look up these services. Currently these classes reach
> out
> > to
> > > > singletons in order to "get" what they need.
> > > >
> > > > *A) Function*
> > > >
> > > > The main entry-point which injects services into the Function is:
> > > >
> > > > public void execute(FunctionContext context);
> > > >
> > > > The FunctionContext needs to provide the service(s) that any Function
> > > might
> > > > require. This could include Cache, DistributionManager and maybe
> > > > SecurityService (anything else?).
> > > >
> > > > *B) (Peer-to-peer) DistributionMessage*
> > > >
> > > > The main entry-point which injects services into the
> > DistributionMessage
> > > > is:
> > > >
> > > > protected abstract void process(DistributionManager dm);
> > > >
> > > > We could provide multiple arguments or a single new
> DistributionContext
> > > > which then provides DistributionManager and Cache (anything else?).
> > > >
> > > > *C) (Client) Command*
> > > >
> > > > The main entry-point which injects services into the Command is:
> > > >
> > > > public void execute(Message msg, ServerConnection servConn);
> > > >
> > > > ServerConnection is huge but it does already expose Cache.
> BaseCommand
> > is
> > > > the only Command that implements "execute" and it defines a new
> > > entry-point
> > > > for injection:
> > > >
> > > > abstract public void cmdExecute(Message msg, ServerConnection
> servConn,
> > > > long start) throws IOException, ClassNotFoundException,
> > > > InterruptedException;
> > > >
> > > > We might want to clean that up and define a new CommandContext which
> > > > provides the Cache or anything else that the Command may need.
> > > >
> > > > Thoughts or additional ideas?
> > > >
> > >
> >
>
>
>
> --
> -John
> john.blum10101 (skype)
>
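
(A minimal sketch of the context-injection idea in item A above. The
InternalFunctionContext interface and its getters are hypothetical names used
only to show the shape; the security service type is elided.)

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;

// Hypothetical richer context that hands services to the Function.
interface InternalFunctionContext extends FunctionContext {
  Cache getInternalCache();
  Object getSecurityService();
}

// An internal Function then casts to the richer context instead of calling a static singleton.
class ListRegionNamesFunction implements Function {
  @Override
  public void execute(FunctionContext context) {
    InternalFunctionContext internal = (InternalFunctionContext) context;
    Cache cache = internal.getInternalCache();
    context.getResultSender().lastResult(cache.rootRegions().toString());
  }

  @Override
  public String getId() {
    return ListRegionNamesFunction.class.getName();
  }
}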


Re: GeodeRedisAdapter improvments/feedback

2017-02-17 Thread Wes Williams
I'm not clear on the reference to "I like the idea of first class data
structures like Lists and Sorted Sets."

Is the suggestion here to extend Geode to not only support a distributed
ConcurrentHashMap but also distributed ConcurrentLists and
ConcurrentSortedSets?


*Wes Williams | Pivotal Advisory **Data Engineer*
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire

On Thu, Feb 16, 2017 at 10:34 AM, Michael Stolz  wrote:

> I like the idea of first class data structures like Lists and Sorted Sets.
>
> I'm not sure which side of the fence I'm on in terms of very large objects
> and using Regions to represent them. Feels very heavy because of all the
> overhead of a Region entry in Geode (over 300 bytes per entry).
>
> I think the main reason people will want to use Geode in place of Redis
> will be horizontal scale in terms of the number of structures first, size
> of structures second, ability to get additional enterprise features like
> synchronous instead of asynchronous replication from masters to slaves
> (zero-data-loss) multi-site and even multi-cloud use cases (WAN Gateway).
>
>
>
> --
> Mike Stolz
> Principal Engineer, GemFire Product Manager
> Mobile: +1-631-835-4771
>
> On Wed, Feb 15, 2017 at 8:09 PM, Swapnil Bawaskar 
> wrote:
>
> > I think we as a community need to determine what value do we want to add
> > with the Redis adapter. Redis already does a great job storing small data
> > structures in memory and sharding them. We do a great job of making sure
> > that these data structures are horizontally scalable; why would we want
> to
> > replicate what another open source project is already implementing?
> >
> > Having said that, I would like to see a configuration property that lets
> > the user chose between a single server vs a distributed collection.
> >
> >
> > > I think we could have the following options:
> > >
> > >  1. have a property that could be set to use either single server
> > > collections over use the current distributed collection
> > >  2. have first class collection implementations that are distributed by
> > > nature, as using key:value as the hammer for all does not make
> sense
> > >
> >
> > I don't think these options are mutually exclusive. We should make lists
> > and SortedSets first class data structures in Geode alongside regions.
> >
>


Re: [DISCUSS] changes to Redis implementation

2017-02-27 Thread Wes Williams
>>Replicating a whole collection because of 1 change does not really make
too much sense.<<

I agree but won't delta replication prevent sending the entire collection
across the wire?
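
(For context, a minimal sketch of what Geode's Delta interface buys here. The
class below is illustrative only - not the adapter's actual storage class - and
a real implementation would also need proper dirty tracking; the full value is
still sent via normal serialization the first time.)

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.apache.geode.Delta;
import org.apache.geode.InvalidDeltaException;

public class RedisHashValue implements Delta, Serializable {
  private final Map<String, String> fields = new HashMap<>();
  private transient String changedField;   // the single field modified since the last toDelta
  private transient String changedValue;

  public void setField(String field, String value) {
    fields.put(field, value);
    changedField = field;
    changedValue = value;
  }

  @Override
  public boolean hasDelta() {
    return changedField != null;
  }

  @Override
  public void toDelta(DataOutput out) throws IOException {
    // Only the changed entry goes across the wire, not the whole hash.
    out.writeUTF(changedField);
    out.writeUTF(changedValue);
    changedField = null;
    changedValue = null;
  }

  @Override
  public void fromDelta(DataInput in) throws IOException, InvalidDeltaException {
    // The receiving member applies just that one entry.
    fields.put(in.readUTF(), in.readUTF());
  }
}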

*Wes Williams | Pivotal Advisory **Data Engineer*
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire

On Mon, Feb 27, 2017 at 10:08 AM, Udo Kohlmeyer 
wrote:

> I've quickly gone through the changes for the pull request.
>
> The most significant change of this pull request is that the collections
> that initially were regions are single collections (not distributed). That
> said, this is something that we've been discussing. The one thing that I
> wonder about is, what will the performance look like when the collections
> become really large? Replicating a whole collection because of 1 change
> does not really make too much sense.
>
> Maybe this implementation becomes the catalyst for future improvements.
>
> --Udo
>
>
>
> On 2/24/17 15:25, Bruce Schuchardt wrote:
>
>> Gregory Green has posted a pull request that warrants discussion. It
>> improves performance for Sets and Hashes by altering the storage format for
>> these collections.  As such it will not permit a rolling upgrade, though
>> the Redis adapter is labelled "experimental" so maybe that's okay.
>>
>> https://github.com/apache/geode/pull/404
>>
>> The PR also fixes GEODE-2469, inability to handle hash keys having colons.
>>
>> There was some discussion about altering the storage format that was
>> initiated by Hitesh.  Personally I think Gregory's changes are better than
>> the current implementation and we should accept them, though I haven't gone
>> through the code changes extensively.
>>
>>
>


Re: [GitHub] geode pull request #404: Geode 2469

2017-03-06 Thread Wes Williams
And correcting the spelling of "SEPERATOR" would be a plus while changing
the code.

*Wes Williams | Pivotal Advisory **Data Engineer*
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire

On Mon, Mar 6, 2017 at 6:14 PM, galen-pivotal  wrote:

> Github user galen-pivotal commented on a diff in the pull request:
>
> https://github.com/apache/geode/pull/404#discussion_r104549703
>
> --- Diff: geode-core/src/main/java/org/apache/geode/redis/internal/
> executor/hash/HashInterpreter.java ---
> @@ -0,0 +1,126 @@
> +/*
> + * Licensed to the Apache Software Foundation (ASF) under one or more
> contributor license
> + * agreements. See the NOTICE file distributed with this work for
> additional information regarding
> + * copyright ownership. The ASF licenses this file to You under the
> Apache License, Version 2.0 (the
> + * "License"); you may not use this file except in compliance with
> the License. You may obtain a
> + * copy of the License at
> + *
> + * http://www.apache.org/licenses/LICENSE-2.0
> + *
> + * Unless required by applicable law or agreed to in writing,
> software distributed under the License
> + * is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
> CONDITIONS OF ANY KIND, either express
> + * or implied. See the License for the specific language governing
> permissions and limitations under
> + * the License.
> + */
> +package org.apache.geode.redis.internal.executor.hash;
> +
> +import java.util.Map;
> +
> +import org.apache.geode.cache.Region;
> +import org.apache.geode.redis.GeodeRedisServer;
> +import org.apache.geode.redis.internal.ByteArrayWrapper;
> +import org.apache.geode.redis.internal.Coder;
> +import org.apache.geode.redis.internal.ExecutionHandlerContext;
> +import org.apache.geode.redis.internal.RedisDataType;
> +
> +/**
> + * Utility class for interpreting and processing Redis Hash data
> structure
> + *
> + *
> + */
> +public class HashInterpreter {
> +
> +  /**
> +   * 
> +   * The region:key separator.
> +   *
> +   *  REGION_KEY_SEPERATOR = ":"
> +   *
> +   * See Hash section of https://redis.io/topics/data-types#Hashes
> +   * 
> +   */
> +  public static final String REGION_KEY_SEPERATOR = ":";
> +
> +  /**
> +   * The default hash region name REGION_HASH_REGION = Coder.
> stringToByteArrayWrapper("ReDiS_HASH")
> +   */
> +  public static final ByteArrayWrapper REGION_HASH_REGION =
> +  Coder.stringToByteArrayWrapper(GeodeRedisServer.HASH_REGION);
> +
> +  /**
> +   * Return the region presenting the hash
> +   *
> +   * @param key the raw Redis command key that may contain the
> region:key formation
> +   * @param context the exception handler context
> +   * @return the region were the command's data will be processed
> +   */
> +  @SuppressWarnings("unchecked")
> +  public static Region<ByteArrayWrapper, Map<ByteArrayWrapper, ByteArrayWrapper>> getRegion(
> --- End diff --
>
> @ggreen yeah, I'd put everything in one region because I think it's
> easier to understand, and because it's much cheaper to create new hash
> objects in a region than it is to create new regions. Though if you want to
> see my hidden agenda, look ahead:
>
> if I were to redesign data storage in the Redis adapter, I think I
> would do away with the separate region per type and a metadata region that
> just stores types, and implement the whole thing as one region that stored
> collections of all the types we support. During the lookup we could catch a
> `ClassCastException` if the key is the wrong type, and then we'd propagate
> that up as the same error Redis throws when you try to modify a key that is
> of the wrong type.
>
> Storing collections as Java objects rather than via some translation
> to a Region means that as the objects get larger, the cost of transferring
> them between members in the system increases as well. Geode contains a
> `Delta` interface that I think we could use to avoid the overhead of
> transferring the whole object every time. Then the only downside is that we
> can't scale a single hash/set/list across servers, which I think is fine --
> do you really need to store a list in Redis that takes more than however
> many gigabytes of RAM are in a single Geode instance? Some folks on the
> user/dev lists seem to disagree with this view, though, so take it with a
> bit of salt.
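
(A small sketch of the single-region lookup idea above - illustrative names
only, not the pull request's code: every Redis key lives in one region, and a
type mismatch surfaces as Redis's WRONGTYPE error.)

import java.util.Map;

import org.apache.geode.cache.Region;

final class SingleRegionLookup {
  @SuppressWarnings("unchecked")
  static Map<String, String> getHash(Region<String, Object> data, String key) {
    Object value = data.get(key);
    try {
      return (Map<String, String>) value;
    } catch (ClassCastException e) {
      // Same behavior Redis has when a key holds a different type.
      throw new IllegalStateException(
          "WRONGTYPE Operation against a key holding the wrong kind of value", e);
    }
  }
}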

Re: ReflectionBasedAutoSerializer by default?

2017-03-27 Thread Wes Williams
Most new customers just want Geode/GemFire to work - easily - because most
projects don't require dealing with PDX. In fact, some explicitly tell me
that they don't want to know about PDX because it's a distraction for the
simple use cases (unless, of course, they actually need it). And it is not to
Geode's advantage if a new user gets back a proprietary PDX object and has to
do research to figure out what to do with it.
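
(For context, the configuration a user has to write today looks roughly like
this - a minimal sketch, with "com.example.*" as a placeholder pattern. The
proposal under discussion would make this the default so a new user never has
to write it.)

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.pdx.ReflectionBasedAutoSerializer;

public class PdxSetupExample {
  public static void main(String[] args) {
    Cache cache = new CacheFactory()
        .setPdxSerializer(new ReflectionBasedAutoSerializer("com.example.*"))
        .setPdxReadSerialized(true)   // reads return PdxInstance instead of the domain class
        .create();
    // ... put and get domain objects ...
    cache.close();
  }
}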

However, I definitely disagree with this...

*>  I think that customers would be ok to run PdxInstance.getObject() to
get their pojos when required.*

On Mon, Mar 27, 2017 at 4:32 PM, John Blum  wrote:

> I generally don't see a problem with this from the *Spring* side, i.e. SDG
> does not care as long as the "default" PdxSerializer (e.g.
> ReflectionBasedAutoSerializer or whatever) is "overridable" before cache
> initialization!
>
> However, I definitely disagree with this...
>
> *>  I think that customers would be ok to run PdxInstance.getObject() to
> get their pojos when required.*
>
> IMO, I think (PDX) serialization really ought to be treated as an "internal"
> concern and really should NOT be exposed to users.  This is more of an
> implementation detail than anything else, and in an application, as a
> developer, I can tell you that I want to manipulate my POJOs.  You do not
> see Hibernate or other mapping/persistence frameworks having you deal with
> the "internal" classes to make an entity "persistent" (i.e. attached vs.
> non-attached, etc.).
>
> Longer term, I think the serialization mechanics should be "configurable",
> being able to plugin whatever serialization framework (e.g. Avro, Protobuf)
> makes sense for my application.
>
> $0.02
> -j
>
>
>
> On Mon, Mar 27, 2017 at 1:15 PM, Udo Kohlmeyer 
> wrote:
>
> > +1 I think we should make the default serialization mechanism to be PDX.
> > We should not concern ourselves with Java serialization at all anymore.
> >
> > +1 pdx-read-serialization=true ... I think we should prefer PdxInstance
> > objects over customer POJOs. I think that customers would be ok to run
> > PdxInstance.getObject() to get their pojos when required. But maybe to
> > start having customers use a more "optimal" serialization mechanism and
> > approach, a slight nudge into the PDXInstance direction is not that bad.
> >
> > In addition to that, it might help if our own code expect PdxInstances
> > over Pojos.
> >
> > --Udo
> >
> >
> > On 3/27/17 12:58, Swapnil Bawaskar wrote:
> >
> >> I believe it would be much better user experience if we just serialized
> >> user's domain object without requiring the user to configure anything.
> >> Currently, we require that the user specify that they want to use the
> >> ReflectionBasedAutoSerializer and the pattern that matches the domain
> >> objects.
> >>
> >> Looking at this code
> >> <https://github.com/apache/geode/blob/8bf39571471642beaaa36c9626a61a90bd3803c2/geode-core/src/main/java/org/apache/geode/pdx/internal/AutoSerializableManager.java#L213>
> >> it
> >> looks like the pattern can be made optional. Also, we can go ahead and
> >> configure ReflectionBasedAutoSerializer to be set by default on Cache
> >> startup (if one is not specified already). We should also set
> >> pdx-read-serialized to true in this case.
> >> For advanced use cases where the user wishes to exclude certain fields,
> >> they can specify the pattern.
> >> If the users are using DataSerializable, that should still take
> precedence
> >> over PDX, so we won't break existing users.
> >>
> >> Are there any major concerns around this approach?
> >>
> >> Thanks!
> >> Swapnil.
> >>
> >>
> >
>
>
> --
> -John
> john.blum10101 (skype)
>


Re: copy files to servers

2017-01-08 Thread Wes Williams
This seems dangerous to me, as a client could send malicious code for
execution to all servers.
Geode requires an administrative step to do gfsh> deploy --jar=..., which
provides a control point that theoretically should catch a malicious client.

As for third-party jars, they do the same as what we're discussing:

> "Our suggestion is to include all 3rd party libraries into class path of
> every node. This can be achieved by copying your JAR files into the Ignite
> libs folder."


As for the dynamic loading and running of Spring jars, let's say that
timeline pressures and money outweigh a small, temporary network
saturation as we distribute Spring jars to the nodes.  We then get to the
dynamic loading:

If class is not locally available, then a request will be sent to the
> originating node to provide class definition. Originating node will send
> class byte-code definition and the class will be loaded on the worker node.
> This happens only once per class - once class definition is loaded on a
> node, it will never have to be loaded again.


I think it's practical.  However, isn't this potentially dangerous, since a
client could intentionally inject malicious code into the grid?  Whereas
gfsh> deploy --jar=... gives Operations a control checkpoint that can
theoretically thwart such an attack.



*Wes Williams | Pivotal Advisory **Data Engineer*
781.606.0325
http://pivotal.io/big-data/pivotal-gemfire

On Fri, Jan 6, 2017 at 8:46 PM, Roman Shaposhnik 
wrote:

> Btw, I'm sure a comparison of capabilities with Ignite will come up at
> some point. So here's what
> they do in this department (which I personally find really cool):
>http://apacheignite.gridgain.org/v1.0/docs/zero-deployment
>
> Thanks,
> Roman.
>
> On Fri, Jan 6, 2017 at 12:11 PM, Anthony Baker  wrote:
> > Hmmm, I agree with Udo.  I’d like to push a new version of my
> application with a single idempotent command.  The server should be smart
> enough to figure out what's in my bundle and understand how to deploy it
> including any dependencies (because who writes dependency-free code?).
> >
> > I do want some lifecycle hooks to alloc/free resources.  This seems
> conceptually similar to the “war” model which is pretty familiar to most
> Java devs.
> >
> > Anthony
> >
> >> On Jan 6, 2017, at 11:37 AM, Udo Kohlmeyer 
> wrote:
> >>
> >> In some ways that is a great idea but sometimes too explicit... Do
> we expect them to have fine grained jars?
> >> Also how do we handle dependencies as a single util class might be
> used by both a cache-listener and a partition listener... is the
> expectation that we update the dependent util class for one but not the
> other
> >>
> >> It's a very grey area
> >>
> >> On 1/6/17 11:19, John Blum wrote:
> >>> How about...
> >>>
> >>> * deploy function
> >>> * deploy cache-listener
> >>> * deploy cache-loader
> >>> * deploy cache-writer
> >>> * deploy resource (jar, xml, properties, etc)
> >>> * etc.
> >>>
> >>> Might as was make it explicit.  For instance, I may have a JAR file I
> just
> >>> deployed (uploaded) that contains Functions, Listeners, Loaders,
> Writers,
> >>> etc but I only want to deploy functions.
> >>>
> >>> Having 1 uber "deploy" command with many options gets cumbersome.
> >>>
> >>> It is a simple matter to introduce multiple command but have those
> commands
> >>> share similar logic.  This would also enable different workflows for
> >>> different commands in a more non-convoluted, maintainable way.
> >>>
> >>> These could be matched with corresponding `undeploy` commands.
> >>>
> >>> Food for thought,
> >>>
> >>> John
> >>>
> >>>
> >>>
> >>> On Fri, Jan 6, 2017 at 11:11 AM, Kirk Lund  wrote:
> >>>
> >>>> With appropriate constraints, a copy file type command could be
> secure.
> >>>>
> >>>> 1) don't use Apache Geode without security AND make the command
> require
> >>>> authorization permissions
> >>>> 2) limit the target directory to a directory under the working
> directory of
> >>>> the remote server
> >>>> 3) rename it to "deploy resource" so people don't expect it to copy
> to an
> >>>> arbitrary target directory on the remote machine
> >>>>
> >>>> Back to "deploy jar":
> &

[jira] [Created] (GEODE-2570) Export cluster-configuration returns "GemFire error" message

2017-03-01 Thread Wes Williams (JIRA)
Wes Williams created GEODE-2570:
---

 Summary: Export cluster-configuration returns "GemFire error" 
message
 Key: GEODE-2570
 URL: https://issues.apache.org/jira/browse/GEODE-2570
 Project: Geode
  Issue Type: Bug
  Components: configuration, gfsh
Reporter: Wes Williams
 Fix For: 1.2.0, 1.1.0


gfsh>version
1.1.0

gfsh>start locator --name=locator1 --port=10334 
--properties-file=config/locator.properties 
--load-cluster-configuration-from-dir=true --initial-heap=256m --max-heap=256m

gfsh>start server --name=server1 --server-port=0 
--properties-file=config/gemfire.properties --initial-heap=1g --max-heap=1g

gfsh>list regions
No Regions Found

gfsh>list members
  Name   | Id
-------- | -----------------------------------------
locator1 | 192.168.0.5(locator1:43398:locator):1024
server1  | 192.168.0.5(server1:43404):1025

gfsh>create region --name=Test --type=PARTITION
Member  | Status
--- | ---
server1 | Region "/Test" created on "server1"

gfsh>export cluster-configuration --zip-file-name=test.zip
Could not process command due to GemFire error. Error while processing command 
 Reason : null

[info 2017/03/01 16:27:55.414 EST locator1  
tid=0x6c] (tid=108 msgId=72) Could not execute "export cluster-configuration 
--zip-file-name=test.zip".
java.lang.NullPointerException
at 
org.apache.geode.management.internal.cli.commands.ExportImportClusterConfigurationCommands.exportSharedConfig(ExportImportClusterConfigurationCommands.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:216)
at 
org.apache.geode.management.internal.cli.remote.RemoteExecutionStrategy.execute(RemoteExecutionStrategy.java:91)
at 
org.apache.geode.management.internal.cli.remote.CommandProcessor.executeCommand(CommandProcessor.java:117)
at 
org.apache.geode.management.internal.cli.remote.CommandStatementImpl.process(CommandStatementImpl.java:71)
at 
org.apache.geode.management.internal.cli.remote.MemberCommandService.processCommand(MemberCommandService.java:52)
at 
org.apache.geode.management.internal.beans.MemberMBeanBridge.processCommand(MemberMBeanBridge.java:1639)
at 
org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:404)
at 
org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:397)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
at 
com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
at 
com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
at 
com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
at 
com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
at 
com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252)
at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819)
at 
com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801)
at 
javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1466)
at 
javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:76)
at 
javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1307)
at 
javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1399)
at 
javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:828)
at sun.reflect

[jira] [Commented] (GEODE-2570) Export cluster-configuration returns "GemFire error" message

2017-03-07 Thread Wes Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15900055#comment-15900055
 ] 

Wes Williams commented on GEODE-2570:
-

This is a "broken window" that gives a product the perception that it is 
difficult to learn.  The resolution should include a fix where if a path is not 
supplied, then Geode explicitly supplies the ./ 
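
(A sketch of the suggested defaulting - illustrative only; the real fix would
live in ExportImportClusterConfigurationCommands and may look different.)

import java.nio.file.Path;
import java.nio.file.Paths;

final class ZipFileNameDefaults {
  /** If the user supplied only a bare file name, resolve it against the working directory. */
  static Path resolveZipFile(String zipFileName) {
    Path path = Paths.get(zipFileName);
    return path.isAbsolute()
        ? path
        : Paths.get(System.getProperty("user.dir")).resolve(path);
  }
}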



> Export cluster-configuration returns "GemFire error" message
> 
>
> Key: GEODE-2570
> URL: https://issues.apache.org/jira/browse/GEODE-2570
> Project: Geode
>  Issue Type: Bug
>  Components: configuration, gfsh
>Reporter: Wes Williams
> Fix For: 1.1.0, 1.2.0
>
>
> gfsh>version
> 1.1.0
> gfsh>start locator --name=locator1 --port=10334 
> --properties-file=config/locator.properties 
> --load-cluster-configuration-from-dir=true --initial-heap=256m --max-heap=256m
> gfsh>start server --name=server1 --server-port=0 
> --properties-file=config/gemfire.properties --initial-heap=1g --max-heap=1g
> gfsh>list regions
> No Regions Found
> gfsh>list members
>   Name   | Id
>  | 
> locator1 | 192.168.0.5(locator1:43398:locator):1024
> server1  | 192.168.0.5(server1:43404):1025
> gfsh>create region --name=Test --type=PARTITION
> Member  | Status
> --- | ---
> server1 | Region "/Test" created on "server1"
> gfsh>export cluster-configuration --zip-file-name=test.zip
> Could not process command due to GemFire error. Error while processing 
> command  Reason : null
> [info 2017/03/01 16:27:55.414 EST locator1  Connection(6)-192.168.0.5> tid=0x6c] (tid=108 msgId=72) Could not execute 
> "export cluster-configuration --zip-file-name=test.zip".
> java.lang.NullPointerException
>   at 
> org.apache.geode.management.internal.cli.commands.ExportImportClusterConfigurationCommands.exportSharedConfig(ExportImportClusterConfigurationCommands.java:85)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:216)
>   at 
> org.apache.geode.management.internal.cli.remote.RemoteExecutionStrategy.execute(RemoteExecutionStrategy.java:91)
>   at 
> org.apache.geode.management.internal.cli.remote.CommandProcessor.executeCommand(CommandProcessor.java:117)
>   at 
> org.apache.geode.management.internal.cli.remote.CommandStatementImpl.process(CommandStatementImpl.java:71)
>   at 
> org.apache.geode.management.internal.cli.remote.MemberCommandService.processCommand(MemberCommandService.java:52)
>   at 
> org.apache.geode.management.internal.beans.MemberMBeanBridge.processCommand(MemberMBeanBridge.java:1639)
>   at 
> org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:404)
>   at 
> org.apache.geode.management.internal.beans.MemberMBean.processCommand(MemberMBean.java:397)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
>   at 
> com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
>   at 
> com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
>   at 
> com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
>   at 
> com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
>   at 
> com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
>   at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138)
>   at com.sun.

[jira] [Updated] (GEODE-2722) ReflectionBasedAutoSerializer should be used by default

2017-03-27 Thread Wes Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wes Williams updated GEODE-2722:




“So the default pattern would match all class names?”

This is the most usual case and I believe should therefore be the default.  
Placing limitations is the exceptional case.  It’s rather frequent during 
development that I’ll change my package names and the serializer will blow up 
because I forgot to change the pattern. In the common case, it’s really a 
nuisance.







From: Darrel Schneider (JIRA)<mailto:j...@apache.org>
Sent: Monday, March 27, 2017 8:51 PM
To: dev@geode.apache.org<mailto:dev@geode.apache.org>
Subject: [jira] [Commented] (GEODE-2722) ReflectionBasedAutoSerializer should 
be used by default



[ 
https://issues.apache.org/jira/browse/GEODE-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15944321#comment-15944321
 ]

Darrel Schneider commented on GEODE-2722:
-

So the default pattern would match all class names?




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


> ReflectionBasedAutoSerializer should be used by default
> ---
>
> Key: GEODE-2722
> URL: https://issues.apache.org/jira/browse/GEODE-2722
> Project: Geode
>  Issue Type: Improvement
>  Components: serialization
>Reporter: Swapnil Bawaskar
>
> We should not require the user to configure anything when inserting data in 
> Geode. ReflectionBasedAutoSerializer should be set by default on Cache
> startup (if one is not specified already). 
> Also, the pattern required to configure ReflectionBasedAutoSerializer should 
> be made optional: Please see:
> <https://github.com/apache/geode/blob/8bf39571471642beaaa36c9626a61a90bd3803c2/geode-core/src/main/java/org/apache/geode/pdx/internal/AutoSerializableManager.java#L213>
> Please look at this thread for discussion on the dev list: 
> https://lists.apache.org/thread.html/c89974d08d7675d3a636872bce5e838d578df5759c5c1acae611d322@%3Cdev.geode.apache.org%3E



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (GEODE-2725) export logs does not honor --dir

2017-03-28 Thread Wes Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945582#comment-15945582
 ] 

Wes Williams commented on GEODE-2725:
-

The --dir option works but you need to say --dir=./tmp.  Still, the program 
should default to the current directory and not the Geode install directory.

> export logs does not honor --dir
> 
>
> Key: GEODE-2725
> URL: https://issues.apache.org/jira/browse/GEODE-2725
> Project: Geode
>  Issue Type: Sub-task
>  Components: gfsh, logging
>Reporter: Swapnil Bawaskar
>
> When connected to locator via jmx, run the following command:
> {noformat}
> gfsh>export logs --dir=tmp
> {noformat}
> Observer that the dir option is ignored:
> {noformat}
> Logs exported to the connected member's file system: 
> /Users/sbawaskar/apache/geode/geode-assembly/build/install/apache-geode/loc1/exportedLogs_1490721273215.zip
> {noformat}
> The --dir option is honored when connected via http.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2270) Need API to call gfsh and get results dynamically from code

2017-01-05 Thread Wes Williams (JIRA)
Wes Williams created GEODE-2270:
---

 Summary: Need API to call gfsh and get results dynamically from 
code
 Key: GEODE-2270
 URL: https://issues.apache.org/jira/browse/GEODE-2270
 Project: Geode
  Issue Type: Improvement
  Components: gfsh
Reporter: Wes Williams
 Fix For: 1.1.0


GIVEN:
1) The GfshParser and CommandResult are internal classes.
2) CommandResult returns headings, line.separator's and UI concerns along with 
the answer

WHEN:
I pass a gfsh command into a public gfsh API from code

THEN:
I get back an XML representation of the core results without the headings, 
line.separator's and UI concerns

EXAMPLE (idea note and not actual implementation):
WHEN:
String gfshResults = gfshPublicAPI("list regions");
return gfshResults;

String gfshPublicAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
XmlResult results = (XmlResult) parseResult.getMethod()+"Xml"
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}

CommandResult gfshInternalAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
CommandResult results = (CommandResult) parseResult.getMethod()
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (GEODE-2270) Need API to call gfsh and get results dynamically from code

2017-01-05 Thread Wes Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801659#comment-15801659
 ] 

Wes Williams commented on GEODE-2270:
-

There was a thread on this topic in the geode dev mailing list on this.  
Another option is to include a --output=json attribute in gfsh per Anthony 
Baker's suggestion here:

https://mail-archives.apache.org/mod_mbox/incubator-geode-dev/201611.mbox/ajax/%3C1895B674-A87E-453B-BC39-E464B16C4678%40pivotal.io%3E

> Need API to call gfsh and get results dynamically from code
> ---
>
> Key: GEODE-2270
> URL: https://issues.apache.org/jira/browse/GEODE-2270
> Project: Geode
>  Issue Type: Improvement
>  Components: gfsh
>Reporter: Wes Williams
> Fix For: 1.1.0
>
>
> GIVEN:
> 1) The GfshParser and CommandResult are internal classes.
> 2) CommandResult returns headings, line.separator's and UI concerns along 
> with the answer
> WHEN:
> I pass a gfsh command into a public gfsh API from code
> THEN:
> I get back an XML representation of the core results without the headings, 
> line.separator's and UI concerns
> EXAMPLE (idea node and not actual implementation):
> WHEN:
> String gfshResults = gfshPublicAPI("list regions");
> return gfshResults;
> String gfshPublicAPI(String gfshCommand) {
> ParseResult parseResult = gfshParser.parse(gfshCommand);
> XmlResult results = (XmlResult) parseResult.getMethod()+"Xml"
>   .invoke(parseResult.getInstance(), parseResult.getArguments())
>return results;
> }
> CommandResult gfshInternalAPI(String gfshCommand) {
> ParseResult parseResult = gfshParser.parse(gfshCommand);
> CommandResult results = (CommandResult) parseResult.getMethod()
>   .invoke(parseResult.getInstance(), parseResult.getArguments())
>return results;
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (GEODE-2270) Need API to call gfsh and get results dynamically from code

2017-01-05 Thread Wes Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wes Williams updated GEODE-2270:

Description: 
GIVEN:
1) The GfshParser and CommandResult are internal classes.
2) CommandResult returns headings, line.separator's and UI concerns along with 
the answer

WHEN:
I pass a gfsh command into a public gfsh API from code

THEN:
I get back an XML or JSON representation of the core results without the 
headings, line.separator's and UI concerns

EXAMPLE (idea note and not actual implementation):
WHEN:
String gfshResults = gfshPublicAPI("list regions");
return gfshResults;

String gfshPublicAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
String results = (String) parseResult.getMethod()+"Json"
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}

CommandResult gfshInternalAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
CommandResult results = (CommandResult) parseResult.getMethod()
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}


Another option is to include a new property --output=json into gfsh commands 
that return the core results without the UI concerns, per 
https://mail-archives.apache.org/mod_mbox/incubator-geode-dev/201611.mbox/browser

  was:
GIVEN:
1) The GfshParser and CommandResult are internal classes.
2) CommandResult returns headings, line.separator's and UI concerns along with 
the answer

WHEN:
I pass a gfsh command into a public gfsh API from code

THEN:
I get back an XML representation of the core results without the headings, 
line.separator's and UI concerns

EXAMPLE (idea node and not actual implementation):
WHEN:
String gfshResults = gfshPublicAPI("list regions");
return gfshResults;

String gfshPublicAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
XmlResult results = (XmlResult) parseResult.getMethod()+"Xml"
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}

CommandResult gfshInternalAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
CommandResult results = (CommandResult) parseResult.getMethod()
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}



> Need API to call gfsh and get results dynamically from code
> ---
>
> Key: GEODE-2270
> URL: https://issues.apache.org/jira/browse/GEODE-2270
> Project: Geode
>      Issue Type: Improvement
>  Components: gfsh
>Reporter: Wes Williams
> Fix For: 1.1.0
>
>
> GIVEN:
> 1) The GfshParser and CommandResult are internal classes.
> 2) CommandResult returns headings, line.separator's and UI concerns along 
> with the answer
> WHEN:
> I pass a gfsh command into a public gfsh API from code
> THEN:
> I get back an XML or JSON representation of the core results without the 
> headings, line.separator's and UI concerns
> EXAMPLE (idea node and not actual implementation):
> WHEN:
> String gfshResults = gfshPublicAPI("list regions");
> return gfshResults;
> String gfshPublicAPI(String gfshCommand) {
> ParseResult parseResult = gfshParser.parse(gfshCommand);
> String results = (String) parseResult.getMethod()+"Json"
>   .invoke(parseResult.getInstance(), parseResult.getArguments())
>return results;
> }
> CommandResult gfshInternalAPI(String gfshCommand) {
> ParseResult parseResult = gfshParser.parse(gfshCommand);
> CommandResult results = (CommandResult) parseResult.getMethod()
>   .invoke(parseResult.getInstance(), parseResult.getArguments())
>return results;
> }
> Another option is to include a new property --output=json into gfsh commands 
> that return the core results without the UI concerns, per 
> https://mail-archives.apache.org/mod_mbox/incubator-geode-dev/201611.mbox/browser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (GEODE-2270) Need API to call gfsh and get results dynamically from code

2017-01-05 Thread Wes Williams (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wes Williams updated GEODE-2270:

Description: 
GIVEN:
1) The GfshParser and CommandResult are internal classes.
2) CommandResult returns headings, line.separator's and UI concerns along with 
the answer

WHEN:
I pass a gfsh command into a public gfsh API from code

THEN:
I get back an XML or JSON representation of the core results without the 
headings, line.separator's and UI concerns

EXAMPLE (idea note and not actual implementation):
WHEN:
String gfshResults = gfshPublicAPI("list regions");
return gfshResults;

String gfshPublicAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
String results = (String) parseResult.getMethod()+"Json"
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}

CommandResult gfshInternalAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
CommandResult results = (CommandResult) parseResult.getMethod()
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}


Another option is to include a new property --output=json into gfsh commands 
that return the core results without the UI concerns, per Anthony Baker's 
comment on Fri, 04 Nov, 20:31 at: 
https://mail-archives.apache.org/mod_mbox/incubator-geode-dev/201611.mbox/browser

  was:
GIVEN:
1) The GfshParser and CommandResult are internal classes.
2) CommandResult returns headings, line.separator's and UI concerns along with 
the answer

WHEN:
I pass a gfsh command into a public gfsh API from code

THEN:
I get back an XML or JSON representation of the core results without the 
headings, line.separator's and UI concerns

EXAMPLE (idea node and not actual implementation):
WHEN:
String gfshResults = gfshPublicAPI("list regions");
return gfshResults;

String gfshPublicAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
String results = (String) parseResult.getMethod()+"Json"
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}

CommandResult gfshInternalAPI(String gfshCommand) {
ParseResult parseResult = gfshParser.parse(gfshCommand);
CommandResult results = (CommandResult) parseResult.getMethod()
  .invoke(parseResult.getInstance(), parseResult.getArguments())
   return results;
}


Another option is to include a new property --output=json into gfsh commands 
that return the core results without the UI concerns, per 
https://mail-archives.apache.org/mod_mbox/incubator-geode-dev/201611.mbox/browser


> Need API to call gfsh and get results dynamically from code
> ---
>
> Key: GEODE-2270
> URL: https://issues.apache.org/jira/browse/GEODE-2270
> Project: Geode
>      Issue Type: Improvement
>  Components: gfsh
>Reporter: Wes Williams
> Fix For: 1.1.0
>
>
> GIVEN:
> 1) The GfshParser and CommandResult are internal classes.
> 2) CommandResult returns headings, line.separator's and UI concerns along 
> with the answer
> WHEN:
> I pass a gfsh command into a public gfsh API from code
> THEN:
> I get back an XML or JSON representation of the core results without the 
> headings, line.separator's and UI concerns
> EXAMPLE (idea node and not actual implementation):
> WHEN:
> String gfshResults = gfshPublicAPI("list regions");
> return gfshResults;
> String gfshPublicAPI(String gfshCommand) {
> ParseResult parseResult = gfshParser.parse(gfshCommand);
> String results = (String) parseResult.getMethod()+"Json"
>   .invoke(parseResult.getInstance(), parseResult.getArguments())
>return results;
> }
> CommandResult gfshInternalAPI(String gfshCommand) {
> ParseResult parseResult = gfshParser.parse(gfshCommand);
> CommandResult results = (CommandResult) parseResult.getMethod()
>   .invoke(parseResult.getInstance(), parseResult.getArguments())
>return results;
> }
> Another option is to include a new property --output=json into gfsh commands 
> that return the core results without the UI concerns, per Anthony Baker's 
> comment on Fri, 04 Nov, 20:31 at: 
> https://mail-archives.apache.org/mod_mbox/incubator-geode-dev/201611.mbox/browser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (GEODE-2268) Store jar bytes in cluster configuration region

2017-01-06 Thread Wes Williams (JIRA)

[ 
https://issues.apache.org/jira/browse/GEODE-2268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15804708#comment-15804708
 ] 

Wes Williams commented on GEODE-2268:
-

I think it simpler to keep the jars on the file system because it is a visual 
aid to inspect the cluster externally without having to log into gfsh and 
gfsh>deploy jar. Seeing the deployed jars helps when validating deployments and 
things are not right, for instance, when one member has the jar but another 
does not, etc. Placing them internally is arguably not simpler.

> Store jar bytes in cluster configuration region
> ---
>
> Key: GEODE-2268
> URL: https://issues.apache.org/jira/browse/GEODE-2268
> Project: Geode
>  Issue Type: Sub-task
>  Components: management
>Reporter: Jinmei Liao
>
> Currently xml and properties are stored in an internal cluster configuration 
> region.  However, for jar files only the name is stored in this region, while 
> the jar bytes are stored in the filesystem of each locator.  We should 
> simplify things by storing the jar bytes in the same internal region.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)