Sounds good to me. +1

> On Sep 18, 2017, at 11:36 AM, David Kimura <dkim...@pivotal.io> wrote:
> 
> Application code doesn't need to invoke move (and probably shouldn't).  In
> the examples, I think it's more of an exercise to show what would happen if
> invoked.  And really, I expect the same badness to happen with the shared
> pointer method if we dereferenced a moved pointer.
> 
> Thanks,
> David
> 
> On Mon, Sep 18, 2017 at 11:19 AM, Mark Hanson <mhan...@pivotal.io> wrote:
> 
>> If you think this is a significant ease-of-use benefit, why require
>> invoking move? I understand this code, but I have not seen it in recent
>> memory. I agree that this may make use of a copy (move) constructor
>> easier, but...
>> 
>> If the consensus is that this is a significant ease of use benefit for the
>> user, I am on board…
>> 
>> +1
>> 
>> Thanks,
>> Mark
>>> On Sep 18, 2017, at 11:15 AM, David Kimura <dkim...@pivotal.io> wrote:
>>> 
>>> The benefit I see isn't for performance, it's flexibility and ease of use
>>> by the application developer.  Anything we can do to achieve this, I
>>> think, is a significant win.
>>> 
>>> I think the reason for the various approaches is to determine whether we
>>> can safely create the simple/flexible API without incurring a performance
>>> penalty.  Personally, I think the no-nullptr benefit that Sarge mentioned
>>> in a previous thread is enough to warrant giving this approach serious
>>> thought even though it provides no performance benefit.
>>> 
>>> Thanks,
>>> David
>>> 
>>> On Mon, Sep 18, 2017 at 10:47 AM, Mark Hanson <mhan...@pivotal.io> wrote:
>>> 
>>>> Hi All,
>>>> 
>>>> As we are creating a user API, unless there is a significant performance
>>>> benefit, I think we should try to take the simpler route.
>>>> 
>>>> I see benefit to the approach being proposed, but I don’t see a
>>>> significant benefit. Someone please school me if I am wrong.
>>>> 
>>>> I am still in the shared pointer camp for Cache and a raw pointer to
>>>> Cache in CacheImpl.
>>>> 
>>>> Sorry, and thanks,
>>>> Mark
>>>> 
>>>> 
>>>> 
>>>>> On Sep 18, 2017, at 10:14 AM, David Kimura <dkim...@pivotal.io> wrote:
>>>>> 
>>>>> +1 value approach (via some implementation from this thread)
>>>>> 
>>>>> I think I like this.
>>>>> 
>>>>> In all BAD cases, it's the user who shot themselves in the foot by
>>>>> using std::move unsafely.  I expect this behavior is the same as for
>>>>> any other object.  And if we're ever able to get rid of the
>>>>> Cache/CacheImpl circular dependency then we can add a copy constructor.
>>>>> 
>>>>> I also like the idea of passing in a CacheConfig.  My concern though is
>>>>> that it's piling on another big change.
>>>>> 
>>>>> Thanks,
>>>>> David
>>>>> 
>>>>> On Sun, Sep 17, 2017 at 12:02 AM, Jacob Barrett <jbarr...@pivotal.io> wrote:
>>>>> 
>>>>>> Ok, one more idea.
>>>>>> https://gist.github.com/pivotal-jbarrett/beed5f70c1f3a238cef94832b13dab7a
>>>>>> 
>>>>>> The biggest issue with the value model is that we have been using a
>>>>>> factory to build the Cache object. We really don't need one, and if we
>>>>>> get rid of it things look much better. They aren't perfect, since we
>>>>>> still need the back pointer from the impl to the cache for later use.
>>>>>> If we didn't need that then we could allow copy construction. As it
>>>>>> stands right now this version allows value, shared_ptr, unique_ptr,
>>>>>> etc. without any real overhead or RVO issues.
>>>>>> 
>>>>>> The big change is that, rather than have a factory that we set a bunch
>>>>>> of values on and then ask it to create the Cache, we create a
>>>>>> CacheConfig object and pass that in to the Cache's constructor. Cache
>>>>>> passes it to CacheImpl, and CacheImpl sets itself up based on the
>>>>>> config. If you look at what the current factory model does, it isn't
>>>>>> that different. For clarity I added an XmlCacheConfig object that
>>>>>> builds up the CacheConfig via XML. You could imagine a YamlCacheConfig
>>>>>> object *shiver*. The point is we don't care as long as we get a
>>>>>> CacheConfig with all the things we support at "init" time.
>>>>>> 
>>>>>> I know it is a more radical change, but I feel it is more C++ and more
>>>>>> testable than the factory model. I also like that it solves some of
>>>>>> the issues with the value model we were looking at.
>>>>>> 
>>>>>> -Jake
>>>>>> 
>>>>>> On Thu, Sep 14, 2017 at 5:16 PM Jacob Barrett <jbarr...@pivotal.io> wrote:
>>>>>> 
>>>>>>> Y'all here is an attempt to get the best of both worlds.
>>>>>>> https://gist.github.com/pivotal-jbarrett/52ba9ec5de0b494368d1c5282ef188ef
>>>>>>> 
>>>>>>> I thought I would try posting to Gist but so far not impressed, sorry.
>>>>>>> 
>>>>>>> The gist of it is that we can overcome the third-party or transient
>>>>>>> reference back to the public Cache instance by keeping a reference to
>>>>>>> it in the implementation instance and updating it whenever the move
>>>>>>> constructor is called.
>>>>>>> 
>>>>>>> The downside is if you run this test it doesn't show RVO kicking in
>>>>>>> on the second test where we move the value into a shared pointer.
>>>>>>> There are a couple of pitfalls you can stumble into as well by trying
>>>>>>> to use the previous instance to access the cache after the move
>>>>>>> operation, as illustrated by the "BAD" commented lines.
>>>>>>> 
>>>>>>> The upside is the choices this gives the user for ownership of their
>>>>>>> factory-constructed Cache instance. They can keep it as a value or
>>>>>>> move it to a unique or shared pointer.
>>>>>>> 
>>>>>>> Overhead-wise I think we're better off with the value as long as
>>>>>>> there are no moves, rare I would think, but the moves are pretty
>>>>>>> cheap at that point since we only maintain a unique_ptr. After moving
>>>>>>> into a shared_ptr it acts identically to the shared_ptr model
>>>>>>> proposed earlier.
>>>>>>> 
>>>>>>> -Jake
>>>>>>> 
>>>>>>> 
>>>>>>> On Thu, Sep 14, 2017 at 3:36 PM Michael Martell <mmart...@pivotal.io> wrote:
>>>>>>> 
>>>>>>>> Late to this party.
>>>>>>>> 
>>>>>>>> Confession 1: I had to look up both RVO and copy-elision.
>>>>>>>> Confession 2: I don't like using pointers at all. I used to, but
>>>>>>>> I've evolved to just use C# and Java :)
>>>>>>>> 
>>>>>>>> Without investing a lot more time, I don't have strong feelings
>>>>>>>> about raw vs shared pointers. My only question is: can we return a
>>>>>>>> pointer to an abstract class everywhere we return objects? Just
>>>>>>>> thinking of mocking, which always wants to mock interfaces.
>>>>>>>> 
>>>>>>>> On Thu, Sep 14, 2017 at 2:25 PM, Michael William Dodge <mdo...@pivotal.io> wrote:
>>>>>>>> 
>>>>>>>>> +0 shared pointer
>>>>>>>>> 
>>>>>>>>>> On 14 Sep, 2017, at 14:09, Ernest Burghardt <eburgha...@pivotal.io> wrote:
>>>>>>>>>> 
>>>>>>>>>> Calling a vote for:
>>>>>>>>>> 
>>>>>>>>>> - Raw pointer
>>>>>>>>>> - shared pointer
>>>>>>>>>> 
>>>>>>>>>> +1 raw pointer; I had to look up RVO and am new to std::move(s)
>>>>>>>>>> 
>>>>>>>>>> On Thu, Sep 14, 2017 at 3:02 PM, Michael William Dodge <mdo...@pivotal.io> wrote:
>>>>>>>>>> 
>>>>>>>>>>> I generally dig reference-counted pointers for avoiding lifetime
>>>>>>>>>>> issues with objects allocated off the heap, but I can live with
>>>>>>>>>>> bare pointers, too.
>>>>>>>>>>> 
>>>>>>>>>>> Sarge
>>>>>>>>>>> 
>>>>>>>>>>>> On 13 Sep, 2017, at 16:25, Mark Hanson <mhan...@pivotal.io> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> Hi All,
>>>>>>>>>>>> 
>>>>>>>>>>>> I favor the “pointer" approach that is identified in the code
>>>>>>>>>>>> sample. There is greater clarity, and fewer bytes seemingly
>>>>>>>>>>>> created and written. We do sacrifice the potential ease of using
>>>>>>>>>>>> an object, but all in all, I think the way our code is
>>>>>>>>>>>> structured is not conducive to a value approach, from an
>>>>>>>>>>>> efficiency standpoint, because of our use of the pimpl model.
>>>>>>>>>>>> 
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Mark
>>>>>>>>>>>> 
>>>>>>>>>>>>> On Sep 12, 2017, at 11:09 AM, Jacob Barrett <jbarr...@pivotal.io> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> My biggest concern with this model is that access to the public
>>>>>>>>>>>>> Cache object from other public objects results in additional
>>>>>>>>>>>>> allocations of a Cache value. Think about when we are inside a
>>>>>>>>>>>>> Serializable object and we access the Cache from DataOutput.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> As value:
>>>>>>>>>>>>> Serializable* MyClass::fromData(DataInput& dataInput) {
>>>>>>>>>>>>>   auto cache = dataInput.getCache();
>>>>>>>>>>>>>   ...
>>>>>>>>>>>>> }
>>>>>>>>>>>>> In this the value of cache will RVO the allocation of Cache in
>>>>>>>>>>>>> the getCache call into the stack of this method, great. The
>>>>>>>>>>>>> problem is that Cache must contain a std::shared_ptr<CacheImpl>,
>>>>>>>>>>>>> which means that each allocation is 8 bytes (pointer to control
>>>>>>>>>>>>> block and pointer to CacheImpl) as well as having to increment
>>>>>>>>>>>>> the strong counter in the control block. On exit/descope, the
>>>>>>>>>>>>> Cache will have to decrement the control block as well.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Using the current shared_ptr pimpl model:
>>>>>>>>>>>>> Serializable* MyClass::fromData(DataInput& dataInput) {
>>>>>>>>>>>>>   auto& cache = dataInput.getCache();
>>>>>>>>>>>>>   ...
>>>>>>>>>>>>> }
>>>>>>>>>>>>> We only suffer the ref allocation of 4 bytes and no ref count.
>>>>>>>>>>>>> Since this function call can't survive past the lifespan of
>>>>>>>>>>>>> Cache/CacheImpl, they don't need to have shared_ptr and
>>>>>>>>>>>>> refcounting.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Given that this method could be called numerous times, is the
>>>>>>>>>>>>> overhead of the value version going to be a significant
>>>>>>>>>>>>> performance issue?
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I worry that moves and RVO are just beyond most developers.
>>>>>>>>>>>>> Heck, I didn't know anything about it until we started
>>>>>>>>>>>>> exploring it.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> -Jake
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Tue, Sep 12, 2017 at 8:06 AM David Kimura <dkim...@pivotal.io> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Follow-up of the attached discussion after more investigation.
>>>>>>>>>>>>>> I created an example of returning Cache as a shared pointer
>>>>>>>>>>>>>> versus a raw value:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> https://github.com/dgkimura/geode-native-sandbox
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I still like returning by value as it lets the user do what
>>>>>>>>>>>>>> they want with their object.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> // Here user creates object on their stack.
>>>>>>>>>>>>>> auto c = CacheFactory::createFactory().create();
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> // Here user creates smart pointer in their heap.
>>>>>>>>>>>>>> auto cptr = std::make_shared<Cache>(CacheFactory::createFactory().create());
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Difficulty of implementing this is high due to circular
>>>>>>>>>>>>>> dependencies of Cache/CacheImpl, as well as objects hanging
>>>>>>>>>>>>>> off CacheImpl that return Cache. We must be extra careful when
>>>>>>>>>>>>>> dealing with move/copy semantics of Cache/CacheImpl.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> The alternative is to keep as is and only permit heap
>>>>>>>>>>>>>> allocations from the factory using shared pointers.
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> David
>>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>> 
>>>> 
>> 
>> 
