That is similar to how sync.Map works, but it involves much more complex code.

More importantly, though, if multiple entries need to be kept in sync, that 
technique doesn’t work - at least not directly or easily. This is a common 
need with associated caches.
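
To make that concrete - a minimal sketch with made-up keys, showing why 
per-key atomicity isn’t enough:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var m sync.Map
        // Each Store is atomic for its own key...
        m.Store("token", "abc")
        m.Store("expiry", "10:00")

        // ...but updating two associated entries is not atomic as a
        // pair: a concurrent reader can observe the new "token" with
        // the old "expiry". sync.Map offers no multi-key transaction.
        m.Store("token", "def")
        m.Store("expiry", "11:00")

        tok, _ := m.Load("token")
        exp, _ := m.Load("expiry")
        fmt.Println(tok, exp)
    }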

Even copy-on-write isn’t always suitable. Assume you have a map (cache) that 
is 1GB in size. It is mostly read, but you need to update an entry every once 
in a while.

With copy-on-write, the “create a new value” step needs to allocate a new map 
and copy over the existing one - very expensive - and then atomically replace 
the reference.

With multiple writers this can be even more expensive, since you need a 
secondary lock to keep each writer from making its own expensive copy and 
then failing the CAS (though that lock is no more expensive than the write 
lock in an RWMutex).
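
A minimal sketch of that copy-on-write shape - assuming Go 1.19+ for the 
generic atomic.Pointer, and with illustrative names:

    package cache

    import (
        "sync"
        "sync/atomic"
    )

    // cowCache: readers do a single atomic load; writers copy the
    // whole map under a mutex before publishing the new version.
    type cowCache struct {
        m  atomic.Pointer[map[string]string]
        mu sync.Mutex // serializes writers: only one copy in flight
    }

    func newCowCache() *cowCache {
        c := &cowCache{}
        empty := map[string]string{}
        c.m.Store(&empty)
        return c
    }

    // Get is lock-free: one atomic pointer load, no copying.
    func (c *cowCache) Get(k string) (string, bool) {
        v, ok := (*c.m.Load())[k]
        return v, ok
    }

    // Set pays the full O(n) copy described above, then atomically
    // replaces the reference. With a 1GB map this is the expensive
    // part, no matter how small the change.
    func (c *cowCache) Set(k, v string) {
        c.mu.Lock()
        defer c.mu.Unlock()
        old := *c.m.Load()
        next := make(map[string]string, len(old)+1)
        for key, val := range old {
            next[key] = val
        }
        next[k] = v
        c.m.Store(&next)
    }

Note that Set holds the mutex across the whole copy, which is exactly the 
cost being described: every write is O(size of map), even for a one-entry 
change.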


> On Feb 4, 2023, at 6:14 PM, Ian Lance Taylor <[email protected]> wrote:
> 
> On Sat, Feb 4, 2023 at 3:24 PM Robert Engels <[email protected]> wrote:
>> 
>> That only works if what it is pointing to is cheap to copy. If it is a large 
>> multi-layer structure, an RW lock is usually more efficient.
> 
> No significant copying is required, you just get a pointer to the
> value.  Then you have some way to determine whether it is up to date.
> If not, you create a new value and store a pointer to it back in the
> atomic.Pointer.
> 
> Ian
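
Something like this, I take it - a sketch of the load/check/replace pattern, 
where entry and fetchValue are illustrative stand-ins:

    package cache

    import (
        "sync/atomic"
        "time"
    )

    type entry struct {
        expires time.Time
        value   string
    }

    var current atomic.Pointer[entry]

    // get returns the cached value with a single atomic load when it
    // is up to date; otherwise it builds a fresh value and tries to
    // publish it.
    func get() string {
        e := current.Load()
        if e != nil && time.Now().Before(e.expires) {
            return e.value // up to date: just a pointer load
        }
        fresh := &entry{
            expires: time.Now().Add(time.Minute),
            value:   fetchValue(),
        }
        // If another goroutine won the race, use its value instead.
        if !current.CompareAndSwap(e, fresh) {
            return current.Load().value
        }
        return fresh.value
    }

    // fetchValue stands in for the real work of producing a value.
    func fetchValue() string { return "fresh" }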
> 
> 
>>> On Feb 4, 2023, at 5:19 PM, Ian Lance Taylor <[email protected]> wrote:
>>> 
>>> On Sat, Feb 4, 2023 at 3:11 PM Robert Engels <[email protected]> wrote:
>>>> 
>>>> I think with server processes - with possibly 100k+ connections - the 
>>>> contention on a “read mainly” cache is more than you think. This test only 
>>>> uses 500 readers with little work to simulate the 100k case.
>>> 
>>> Not to get too far into the weeds, but if I were expecting that kind
>>> of load I would use an atomic.Pointer anyhow, rather than any sort of
>>> mutex.
>>> 
>>> Ian
>>> 
>>>>> On Feb 4, 2023, at 4:59 PM, Ian Lance Taylor <[email protected]> wrote:
>>>>> 
>>>>> On Sat, Feb 4, 2023 at 8:49 AM robert engels <[email protected]> wrote:
>>>>>> 
>>>>>> I took some time to put this to a test. The Go program here 
>>>>>> (https://go.dev/play/p/378Zn_ZQNaz) holds the lock for a VERY short 
>>>>>> time, but spends a large % of the runtime holding the lock.
>>>>>> 
>>>>>> (You can’t run it on the Playground because of how long it takes.) You 
>>>>>> can comment/uncomment lines 28-31 to test the different mutexes.
>>>>>> 
>>>>>> It simulates a common system scenario (most web services) - lots of 
>>>>>> readers of the cache, but the cache is updated infrequently.
>>>>>> 
>>>>>> On my machine the RWMutex is > 50% faster - taking 22 seconds vs 47 
>>>>>> seconds using a simple Mutex.
>>>>>> 
>>>>>> It is easy to understand why - you get no parallelization of the readers 
>>>>>> when using a simple Mutex.
>>>>> 
>>>>> Thanks for the benchmark.  You're right: if you have hundreds of
>>>>> goroutines doing nothing but acquiring a read lock, then an RWMutex
>>>>> can be faster.  The key there is that there are always multiple
>>>>> goroutines waiting for the lock.
>>>>> 
>>>>> I still stand by my statement for more common use cases.
>>>>> 
>>>>> Ian
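
For reference, the same read-heavy shape can be reproduced with the standard 
testing package - a rough sketch, not the linked program (which used 500 
readers doing a little work):

    package cache

    import (
        "sync"
        "testing"
    )

    var (
        mu   sync.Mutex
        rwmu sync.RWMutex
        data = map[int]int{1: 1}
    )

    // Many goroutines, each holding the lock very briefly, read-only.
    func BenchmarkMutexRead(b *testing.B) {
        b.RunParallel(func(pb *testing.PB) {
            for pb.Next() {
                mu.Lock()
                _ = data[1]
                mu.Unlock()
            }
        })
    }

    func BenchmarkRWMutexRead(b *testing.B) {
        b.RunParallel(func(pb *testing.PB) {
            for pb.Next() {
                rwmu.RLock()
                _ = data[1]
                rwmu.RUnlock()
            }
        })
    }

Put it in a _test.go file and run, e.g., go test -bench . -cpu 8 to vary the 
number of parallel readers.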
>>>>> 
>>>>> 
>>>>>> On Jan 30, 2023, at 8:29 PM, Ian Lance Taylor <[email protected]> wrote:
>>>>>> 
>>>>>> On Mon, Jan 30, 2023 at 4:42 PM Robert Engels <[email protected]> wrote:
>>>>>> 
>>>>>> 
>>>>>> Yes, but only for a single reader - any concurrent reader is going to 
>>>>>> park/deschedule.
>>>>>> 
>>>>>> 
>>>>>> If we are talking specifically about Go, then it's more complex than
>>>>>> that.  In particular, the code will spin briefly trying to acquire the
>>>>>> mutex, before queuing.
>>>>>> 
>>>>>> There’s a reason RW locks exist - and I think it is pretty common - but 
>>>>>> agree to disagree :)
>>>>>> 
>>>>>> 
>>>>>> Sure: read-write locks are fine and appropriate when the program holds
>>>>>> the read lock for a reasonably lengthy time.  As I said, my analysis
>>>>>> only applies when code holds the read lock briefly, as is often the
>>>>>> case for a cache.
>>>>>> 
>>>>>> Ian
>>>>>> 
>>>>>> 
>>>>>> On Jan 30, 2023, at 6:23 PM, Ian Lance Taylor <[email protected]> wrote:
>>>>>> 
>>>>>> On Mon, Jan 30, 2023 at 1:00 PM Robert Engels <[email protected]> wrote:
>>>>>> 
>>>>>> 
>>>>>> Pure readers do not need any mutex on the fast path. It is an atomic CAS 
>>>>>> - which is faster than a mutex as it allows concurrent readers. On the 
>>>>>> slow path - fairness with a waiting or active writer - it degenerates to 
>>>>>> the performance of a simple mutex.
>>>>>> 
>>>>>> The issue with a mutex is that you need to acquire it whether reading or 
>>>>>> writing - and this is slow (at least compared to an atomic CAS).
>>>>>> 
>>>>>> 
>>>>>> The fast path of a mutex is also an atomic CAS.
>>>>>> 
>>>>>> Ian
>>>>>> 
>>>>>> On Jan 30, 2023, at 2:24 PM, Ian Lance Taylor <[email protected]> wrote:
>>>>>> 
>>>>>> 
>>>>>> On Mon, Jan 30, 2023 at 11:26 AM Robert Engels <[email protected]> wrote:
>>>>>> 
>>>>>> 
>>>>>> I don’t think that is true. An RW lock is always better when the reader 
>>>>>> activity is far greater than the writer’s - simply because in a good 
>>>>>> implementation the read lock can be acquired without blocking/scheduling 
>>>>>> activity.
>>>>>> 
>>>>>> 
>>>>>> The best read lock implementation is not going to be better than the
>>>>>> best plain mutex implementation.  And with current technology any
>>>>>> implementation is going to require atomic memory operations which
>>>>>> require coordinating cache lines between CPUs.  If your reader
>>>>>> activity is so large that you get significant contention on a plain
>>>>>> mutex (recalling that we are assuming the case where the operations
>>>>>> under the read lock are quick) then you are also going to get
>>>>>> significant contention on a read lock.  The effect is that the read
>>>>>> lock isn't going to be faster anyhow in practice, and your program
>>>>>> should probably be using a different approach.
>>>>>> 
>>>>>> Ian
>>>>>> 
>>>>>> On Jan 30, 2023, at 12:49 PM, Ian Lance Taylor <[email protected]> wrote:
>>>>>> 
>>>>>> 
>>>>>> On Sun, Jan 29, 2023 at 6:34 PM Diego Augusto Molina <[email protected]> wrote:
>>>>>> 
>>>>>> 
>>>>>> From time to time I write a scraper or some other tool that would 
>>>>>> authenticate to a service and then use the auth result to do stuff 
>>>>>> concurrently. But when auth expires, I need to synchronize all my 
>>>>>> goroutines and have a single one do the re-auth process, check the 
>>>>>> status, etc. and then arrange for all goroutines to go back to work 
>>>>>> using the new auth result.
>>>>>> 
>>>>>> To generalize the problem: multiple goroutines read a cached value that 
>>>>>> expires at some point. When it does, they all should block and some I/O 
>>>>>> operation has to be performed by a single goroutine to renew the cached 
>>>>>> value, then unblock all other goroutines and have them use the new value.
>>>>>> 
>>>>>> I solved this in the past in a number of ways: having a single goroutine 
>>>>>> handle the cache and asking it for the value through a channel, or using 
>>>>>> sync.Cond (which, btw, every time I decide to use it I need to carefully 
>>>>>> re-read its docs and do lots of tests, because I never get it right at 
>>>>>> first). But what I’ve come to do lately is implement an upgradable lock 
>>>>>> and have every goroutine do:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> We have historically rejected this kind of adjustable lock.  There is
>>>>>> some previous discussion at https://go.dev/issue/4026,
>>>>>> https://go.dev/issue/23513, https://go.dev/issue/38891, and
>>>>>> https://go.dev/issue/44049.
>>>>>> 
>>>>>> For a cache where checking that the cached value is valid (not stale) 
>>>>>> and fetching the cached value is quick, you will in general be 
>>>>>> better off using a plain Mutex rather than RWMutex.  RWMutex is more 
>>>>>> complicated and therefore slower.  It's only useful to use an RWMutex
>>>>>> when the read case is both contested and relatively slow.  If the read
>>>>>> case is fast then the simpler Mutex will tend to be faster.  And then
>>>>>> you don't have to worry about upgrading the lock.
>>>>>> 
>>>>>> Ian
>>>>>> 
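Tying this back to the original question: under Ian’s advice, the re-auth 
cache becomes a plain Mutex with a quick validity check, where the first 
goroutine to see a stale value renews it while the others queue behind the 
same lock. A hedged sketch - renew is a stand-in for the real auth I/O:

    package cache

    import (
        "sync"
        "time"
    )

    type authCache struct {
        mu      sync.Mutex
        token   string
        expires time.Time
    }

    // Token does the quick check-and-fetch under a plain Mutex. When
    // the value is stale, the goroutine holding the lock renews it;
    // everyone else blocks on the same lock and then sees the fresh
    // value - no upgradable lock needed.
    func (c *authCache) Token() (string, error) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if time.Now().Before(c.expires) {
            return c.token, nil // fast path: check + fetch are quick
        }
        tok, exp, err := renew() // stand-in for the real re-auth I/O
        if err != nil {
            return "", err
        }
        c.token, c.expires = tok, exp
        return c.token, nil
    }

    // renew is hypothetical; a real implementation would call the
    // auth service.
    func renew() (string, time.Time, error) {
        return "new-token", time.Now().Add(time.Hour), nil
    }

Holding the mutex across the renewal is deliberate here: it gives exactly 
the “everyone blocks while one goroutine re-auths” behavior described above. 
golang.org/x/sync/singleflight is another common way to get the same effect.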