Late to this party.

Confession 1: I had to look up both RVO and copy-elision.
Confession 2: I don't like using pointers at all. I used to, but I've
evolved to just use C# and Java :)

Without investing a lot more time, I don't have strong feelings about raw
vs shared pointers. My only question is: can we return a pointer to an
abstract class everywhere we return objects? I'm just thinking of mocking,
which always wants to mock interfaces.
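
For illustration, a rough sketch of what that could look like, assuming a
hypothetical abstract CacheInterface that the factory returns and a test can
mock (the names here are illustrative, not the actual geode-native API):

  #include <memory>
  #include <string>

  // Hypothetical abstract interface; callers and tests only see this type.
  class CacheInterface {
   public:
    virtual ~CacheInterface() = default;
    virtual void put(const std::string& key, const std::string& value) = 0;
  };

  // Real implementation, hidden behind the factory.
  class CacheImpl : public CacheInterface {
   public:
    void put(const std::string& key, const std::string& value) override {
      // ... real work ...
    }
  };

  // Test double that records calls instead of touching a real cache.
  class MockCache : public CacheInterface {
   public:
    void put(const std::string& key, const std::string& value) override {
      lastKey = key;
      lastValue = value;
    }
    std::string lastKey;
    std::string lastValue;
  };

  // The factory returns a pointer to the abstract type, so production code
  // gets CacheImpl and tests can substitute MockCache.
  std::shared_ptr<CacheInterface> createCache() {
    return std::make_shared<CacheImpl>();
  }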

On Thu, Sep 14, 2017 at 2:25 PM, Michael William Dodge <mdo...@pivotal.io> wrote:

> +0 shared pointer
>
> > On 14 Sep, 2017, at 14:09, Ernest Burghardt <eburgha...@pivotal.io> wrote:
> >
> > Calling a vote for:
> >
> > - Raw pointer
> > - Shared pointer
> >
> > +1 raw pointer; I had to look up RVO and am new to std::move(s)
> >
> > On Thu, Sep 14, 2017 at 3:02 PM, Michael William Dodge <mdo...@pivotal.io> wrote:
> >
> >> I generally dig reference-counted pointers for avoiding lifetime issues
> >> with objects allocated off the heap, but I can live with bare pointers, too.
> >>
> >> Sarge
> >>
> >>> On 13 Sep, 2017, at 16:25, Mark Hanson <mhan...@pivotal.io> wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I favor the “pointer” approach that is identified in the code sample.
> >>> There is greater clarity and fewer bytes seemingly created and written.
> >>> We do sacrifice the potential ease of using an object, but all in all, I
> >>> think the way our code is structured is not conducive to a value
> >>> approach, from an efficiency standpoint, because of our use of the pimpl
> >>> model.
> >>>
> >>> Thanks,
> >>> Mark
> >>>
> >>>> On Sep 12, 2017, at 11:09 AM, Jacob Barrett <jbarr...@pivotal.io> wrote:
> >>>>
> >>>> My biggest concern with this model is that access to the public Cache
> >>>> object from other public objects results in additional allocations of a
> >>>> Cache value. Think about when we are inside a Serializable object and we
> >>>> access the Cache from the DataInput.
> >>>>
> >>>> As value:
> >>>> Serializable* MyClass::fromData(DataInput& dataInput) {
> >>>>   auto cache = dataInput.getCache();
> >>>>   ...
> >>>> }
> >>>> In this, the value of cache will RVO the allocation of Cache in the
> >>>> getCache call into the stack of this method, great. The problem is
> >>>> that Cache must contain a std::shared_ptr<CacheImpl>, which means that
> >>>> each allocation is 8 bytes (pointer to control block and pointer to
> >>>> CacheImpl) as well as having to increment the strong counter in the
> >>>> control block. On exit/descope, the Cache will have to decrement the
> >>>> control block as well.
> >>>>
> >>>> Using the current shared_ptr pimpl model:
> >>>> Serializable* MyClass::fromData(DataInput& dataInput) {
> >>>>   auto& cache = dataInput.getCache();
> >>>>   ...
> >>>> }
> >>>> We only suffer the ref allocation of 4 bytes and no ref count. Since
> >>>> this function call can't survive past the lifespan of Cache/CacheImpl,
> >>>> they don't need to have shared_ptr and refcounting.
> >>>>
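To make that trade-off concrete, here is a minimal sketch, assuming a
pimpl-style Cache that holds a std::shared_ptr<CacheImpl>; the member and
accessor names are made up, not the actual geode-native declarations:

  #include <memory>

  class CacheImpl {};

  // Pimpl-style facade: copying a Cache copies the shared_ptr member, which
  // is two pointers plus an atomic refcount increment (and a matching
  // decrement when the copy goes out of scope).
  class Cache {
   public:
    explicit Cache(std::shared_ptr<CacheImpl> impl) : m_impl(std::move(impl)) {}
   private:
    std::shared_ptr<CacheImpl> m_impl;
  };

  class DataInput {
   public:
    // Returning by value copies m_cache, so the shared_ptr inside is copied
    // and the strong count is bumped.
    Cache getCacheByValue() const { return m_cache; }

    // Returning by reference hands back the existing object: no copy, no
    // refcount traffic, valid as long as this DataInput outlives the caller.
    Cache& getCacheByRef() { return m_cache; }

   private:
    Cache m_cache{std::make_shared<CacheImpl>()};
  };
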
> >>>> Given that this method could be called numerous times, is the overhead
> >>>> of the value version going to be a significant performance issue?
> >>>>
> >>>> I worry that moves and RVO are just beyond most developers. Heck, I
> >>>> didn't know anything about it until we started exploring it.
> >>>>
> >>>>
> >>>>
> >>>> -Jake
> >>>>
> >>>>
> >>>> On Tue, Sep 12, 2017 at 8:06 AM David Kimura <dkim...@pivotal.io> wrote:
> >>>>
> >>>>> Follow-up of the attached discussion after more investigation. I
> >>>>> created an example of returning Cache as a shared pointer versus a
> >>>>> raw value:
> >>>>>
> >>>>>  https://github.com/dgkimura/geode-native-sandbox
> >>>>>
> >>>>> I still like returning by value as it lets the user do what they want
> >>>>> with their object.
> >>>>>
> >>>>>  // Here the user creates the object on their stack.
> >>>>>  auto c = CacheFactory::createFactory().create();
> >>>>>
> >>>>>  // Here the user creates a smart pointer to the object on the heap.
> >>>>>  auto cptr = std::make_shared<Cache>(CacheFactory::createFactory().create());
> >>>>>
> >>>>> Difficulty of implementing this is high due to circular dependencies
> >>>>> of Cache/CacheImpl as well as objects hanging off CacheImpl that
> >>>>> return Cache. We must be extra careful when dealing with move/copy
> >>>>> semantics of Cache/CacheImpl.
> >>>>>
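To illustrate the kind of care being described, a purely hypothetical sketch
of move semantics for a pimpl Cache whose CacheImpl keeps a back-pointer to
its owning Cache (the real Cache/CacheImpl relationship may differ):

  #include <memory>

  class Cache;

  class CacheImpl {
   public:
    void setOwner(Cache* owner) { m_owner = owner; }
   private:
    Cache* m_owner = nullptr;  // back-pointer that must follow the owning Cache
  };

  class Cache {
   public:
    Cache() : m_impl(std::make_shared<CacheImpl>()) { m_impl->setOwner(this); }

    // After a move, the back-pointer inside CacheImpl would still point at
    // the moved-from Cache unless it is re-pointed here.
    Cache(Cache&& other) noexcept : m_impl(std::move(other.m_impl)) {
      if (m_impl) m_impl->setOwner(this);
    }

    Cache& operator=(Cache&& other) noexcept {
      m_impl = std::move(other.m_impl);
      if (m_impl) m_impl->setOwner(this);
      return *this;
    }

   private:
    std::shared_ptr<CacheImpl> m_impl;
  };
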
> >>>>> The alternative is to keep it as is and only permit heap allocations
> >>>>> from the factory using shared pointers.
> >>>>>
> >>>>> Thanks,
> >>>>> David
> >>>>>
> >>>
> >>
> >>
>
>
