Well articulated and a wise decision, Jake. +1

On Fri, Apr 5, 2019 at 8:24 AM Anthony Baker <aba...@pivotal.io> wrote:

> One question: if I’m using thread-local connections, how does that affect
> pool sizing?  Are thread-local connections included in the overall pool
> size or accounted for separately?
>
> We may want explicit release notes if users will need to resize their
> pools during an upgrade.
>
> Anthony
>
>
> > On Apr 5, 2019, at 6:34 AM, Jacob Barrett <jbarr...@pivotal.io> wrote:
> >
> > Devs,
> >
> > The current connection pooling implementation contains a setting that
> > enables a secondary pool that is thread local. See the
> > ClientCacheFactory.setPoolThreadLocalConnections method for details.
> > This thread-local pooling was added to reduce contention on the primary
> > connection pool under high thread count or load. As of the completion
> > of GEODE-6515 there is no longer a contention issue in the primary pool
> > under high thread count or load, effectively rendering thread-local
> > pooling a no-op. Thread-local pooling also introduces many complexities
> > into the primary pool’s management of connections, since a connection
> > can live either in the primary pool or in a pool for each thread.
> > Connection idle timeout, lifetime expiration, and load conditioning all
> > have to work around thread-local connections, making for some
> > interesting relationships and behavior. Additionally, clients using
> > thread-local connections needed to call Pool.releaseThreadLocalConnection
> > prior to the thread terminating, or connections would be held until the
> > GC finalized the thread-local storage. Use of thread-local connections
> > typically means significantly higher connection counts on the servers,
> > since each thread can hoard connections to every server regardless of
> > current workload need.
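> >
> > For context, a minimal sketch of the pattern in question (the locator
> > address and region name here are hypothetical):
> >
> >     import org.apache.geode.cache.Region;
> >     import org.apache.geode.cache.client.ClientCache;
> >     import org.apache.geode.cache.client.ClientCacheFactory;
> >     import org.apache.geode.cache.client.ClientRegionShortcut;
> >
> >     ClientCache cache = new ClientCacheFactory()
> >         .addPoolLocator("localhost", 10334)
> >         .setPoolThreadLocalConnections(true) // the setting at issue
> >         .create();
> >
> >     Region<String, String> region = cache
> >         .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
> >         .create("example");
> >
> >     // Each thread that uses the region checks connections out of its
> >     // own thread-local pool rather than the shared primary pool.
> >     region.put("key", "value");
> >
> >     // Without this call before the thread exits, its connections are
> >     // held until the GC finalizes the thread-local storage.
> >     cache.getDefaultPool().releaseThreadLocalConnection();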
> >
> > I propose that we deprecate all the public APIs for thread-local
> > connections and immediately remove the behavior. The API methods will
> > go from an effective no-op to an actual no-op. A warning will be logged
> > when ClientCacheFactory.setPoolThreadLocalConnections is used.
> > Internally, all of the thread-local connection implementation will be
> > removed. The net effect to the client will be the same as promised with
> > thread-local connections: contention on the primary pool is reduced
> > under high thread count and load. Additionally, the connection count to
> > each server will be reduced, since threads won’t be holding connections
> > to servers they are not actively communicating with. All in all, this
> > will be a net-positive improvement for the consuming client and the
> > distributed system as a whole. Internally, this will be a huge win, as
> > it removes complexity and opens a path to removing even more complexity
> > in idle connection timeouts, connection lifetime expiration, and load
> > conditioning. Please see GEODE-6594 for more details.
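> >
> > To make that concrete, the deprecated setter could end up shaped
> > roughly like this (the logger field and warning text are illustrative,
> > not the actual implementation):
> >
> >     /**
> >      * @deprecated Thread-local connections are no longer needed; the
> >      * refactored primary pool is contention-free by default.
> >      */
> >     @Deprecated
> >     public ClientCacheFactory setPoolThreadLocalConnections(boolean enabled) {
> >       // Intentionally a no-op: warn so callers know the setting is
> >       // ignored and can remove the call before the next major release.
> >       logger.warn("Thread-local connections are deprecated and ignored.");
> >       return this;
> >     }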
> >
> > I realize this is sort of a grey area. An obvious question is, “doesn’t
> > this violate semver by removing a feature in a minor release?” My
> > answer is a solid, “nope!” The feature is defined as “reducing
> > contention on the connection pool”, which is now provided by default in
> > the refactored connection pool. All that is being done is deprecating a
> > useless API and warning that it will be removed in the next major
> > release. The feature remains intact, and client code does not need to
> > be recompiled or changed.
> >
> > I would love to hear if anyone has concerns, questions, or dissenting
> > opinions about this proposal. PR 3394 has been opened for review of the
> > code changes related to the removal. Please provide feedback on the
> > code in the PR, and discuss the merits of the proposal here in this
> > email thread.
> >
> > Thanks,
> > Jake
> >
> > GEODE-6515 - https://issues.apache.org/jira/browse/GEODE-6515
> > GEODE-6594 - https://issues.apache.org/jira/browse/GEODE-6594
> > PR 3394 - https://github.com/apache/geode/pull/3394
> >
>
>

-- 
-John
john.blum10101 (skype)
