Using a classic read/write exclusive lock, we would only ever contend
on read+write or write+write, which are /rare/.
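
To make that concrete, here's a rough sketch of the shape I have in
mind, using std::shared_mutex rather than whatever wrapper Gecko would
actually use; the names and types below are illustrative, not the real
Preferences code:

    #include <shared_mutex>
    #include <string>
    #include <unordered_map>

    // Illustrative only: a pref table guarded by a reader/writer lock.
    // Readers on any thread take the shared lock; writers (main thread)
    // take the exclusive lock, so reads only ever contend with the rare
    // writes.
    class PrefTable {
    public:
      bool GetBool(const std::string& aName, bool aDefault) const {
        std::shared_lock<std::shared_mutex> lock(mLock);  // many concurrent readers
        auto it = mBools.find(aName);
        return it != mBools.end() ? it->second : aDefault;
      }

      void SetBool(const std::string& aName, bool aValue) {
        std::unique_lock<std::shared_mutex> lock(mLock);  // exclusive writer
        mBools[aName] = aValue;
      }

    private:
      mutable std::shared_mutex mLock;
      std::unordered_map<std::string, bool> mBools;
    };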

It's really, really nice when we can have dead-simple threadsafe APIs,
instead of requiring people to jump through hoops or roll their own
(fragile) dispatch code. IIRC most new APIs added to the web are
supposed to be available in Workers, so the need for reading prefs
off-main-thread is only set to grow.

I don't see how this can mutate into a foot-gun in ways that aren't
already the case today without off-main-thread access.

Anyway, I appreciate the work that's been done and is ongoing here. As
you burn down the pref accesses in start-up, please consider
unblocking this feature request. (Personally I'd just eat the 400us in
exchange for this simplifying architectural win)

On Thu, Jul 19, 2018 at 2:19 PM, Kris Maglione <kmagli...@mozilla.com> wrote:
> On Tue, Jul 17, 2018 at 03:49:41PM -0700, Jeff Gilbert wrote:
>>
>> We should totally be able to afford the very low cost of a
>> rarely-contended lock. What's going on that causes uncached pref reads
>> to show up so hot in profiles? Do we have a list of problematic pref
>> keys?
>
>
> So, at the moment, we read about 10,000 preferences at startup in debug
> builds. That number is probably slightly lower in non-debug builds, but we
> don't collect stats there. We're working on reducing that number (which is
> why we collect statistics in the first place), but for now, it's still quite
> high.
>
>
> As for the cost of locks... On my machine, in a tight loop, the cost of
> entering and exiting a MutexAutoLock is about 37ns. This is pretty close to
> ideal circumstances, on a single core of a very fast CPU, with very fast
> RAM, everything cached, and no contention. If we could extrapolate that to
> normal usage, it would be about a third of a ms of additional overhead for
> startup. I've fought hard enough for 1ms startup time improvements, but
> *shrug*, if it were that simple, it might be acceptable.
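
[For concreteness, that measurement is the kind of thing you can
reproduce with a tight loop like the sketch below; the code and the
names in it are mine, not Kris's. The arithmetic is simply ~37ns of
uncontended lock overhead times ~10,000 startup reads, which is roughly
370us, i.e. the "third of a ms" mentioned above.]

    #include <chrono>
    #include <cstdio>
    #include <mutex>

    // Rough lock-overhead microbenchmark (illustrative, not the actual
    // measurement): time an uncontended lock/unlock in a tight loop.
    int main() {
      std::mutex lock;
      constexpr int kIterations = 10'000'000;

      auto start = std::chrono::steady_clock::now();
      for (int i = 0; i < kIterations; ++i) {
        std::lock_guard<std::mutex> guard(lock);  // uncontended acquire/release
      }
      auto end = std::chrono::steady_clock::now();

      auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
                    end - start).count();
      std::printf("~%.1f ns per lock/unlock\n", double(ns) / kIterations);
    }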
>
> But I have no reason to think the lock would be rarely contended. We read
> preferences *a lot*, and if we allowed access from background threads, I
> have no doubt that we would start reading them a lot from background threads
> in addition to reading them a lot from the main thread.
>
> And that would mean, in addition to lock contention, cache contention and
> potentially even NUMA issues. Those last two apply to atomic var caches too,
> but at least they generally apply only to the specific var caches being
> accessed off-thread, rather than pref look-ups in general.
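
[For readers who haven't run into them, an "atomic var cache" is
essentially a per-pref mirror variable along the lines of the sketch
below; the real StaticPrefs machinery generates these, so the names
here are purely illustrative.]

    #include <atomic>

    // Illustrative atomic var cache for a single pref. The main thread
    // updates the mirror when the pref changes; any thread may read it
    // without touching the pref service at all. Only this one variable
    // is shared across threads, not the whole pref hash table.
    static std::atomic<bool> sSomeFeatureEnabled{false};

    // Called on the main thread from a pref-change observer.
    inline void UpdateSomeFeatureEnabled(bool aValue) {
      sSomeFeatureEnabled.store(aValue, std::memory_order_relaxed);
    }

    // Safe to call from any thread.
    inline bool SomeFeatureEnabled() {
      return sSomeFeatureEnabled.load(std::memory_order_relaxed);
    }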
>
>
> Maybe we could get away with it at first, as long as off-thread usage
> remains low. But long term, I think it would be a performance foot-gun. And,
> paradoxically, the less foot-gunny it is, the less useful it probably is,
> too. If we're only using it off-thread in a few places, and don't have to
> worry about contention, why are we bothering with locking and off-thread
> access in the first place?
>
>
>> On Tue, Jul 17, 2018 at 8:57 AM, Kris Maglione <kmagli...@mozilla.com>
>> wrote:
>>>
>>> On Tue, Jul 17, 2018 at 02:06:48PM +0100, Jonathan Kew wrote:
>>>>
>>>>
>>>> On 13/07/2018 21:37, Kris Maglione wrote:
>>>>>
>>>>>
>>>>> tl;dr: A major change to the preference service architecture has just
>>>>> landed, so please be on the lookout for regressions.
>>>>>
>>>>> We've been working for the last few weeks on rearchitecting the
>>>>> preference service to work better in our current and future
>>>>> multi-process configurations, and those changes have just landed in
>>>>> bug 1471025.
>>>>
>>>>
>>>>
>>>> Looks like a great step forward!
>>>>
>>>> While we're thinking about the prefs service, is there any possibility
>>>> we could enable off-main-thread access to preferences?
>>>
>>>
>>>
>>> I think the chances of that are pretty close to 0, but I'll defer to
>>> Nick.
>>>
>>> We definitely can't afford the locking overhead—preference look-ups
>>> already show up in profiles without it. And even the current limited
>>> exception that we grant Stylo while it has the main thread blocked causes
>>> problems (bug 1474789), since it makes it impossible to update statistics
>>> for those reads, or switch to Robin Hood hashing (which would make our
>>> hash tables much smaller and more efficient, but requires read operations
>>> to be able to move entries).
>>>
>>>> I am aware that in simple cases, this can be achieved via the
>>>> StaticPrefsList; by defining a VARCACHE_PREF there, I can read its value
>>>> from other threads. But this doesn't help in my use case, where I need
>>>> another thread to be able to query an extensible set of pref names that
>>>> are not fully known at compile time.
>>>>
>>>> Currently, it looks like to do this, I'll have to iterate over the
>>>> relevant prefs branch(es) ahead of time (on the main thread) and copy all
>>>> the entries to some other place that is then available to my worker
>>>> threads. For my use case, at least, the other threads only need read
>>>> access; modifying prefs could still be limited to the main thread.
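
[That main-thread snapshot approach might look roughly like the sketch
below; the pref name, structures, and helper functions are hypothetical,
and a real version would presumably hook a pref observer on the main
thread to rebuild the copy whenever something changes.]

    #include <memory>
    #include <string>
    #include <unordered_map>

    // Hypothetical read-only snapshot of one prefs branch, built on the
    // main thread and handed to worker threads. Workers never touch the
    // pref service; they only read this immutable copy.
    struct PrefSnapshot {
      std::unordered_map<std::string, std::string> values;
    };

    // Main thread: walk the relevant branch via the pref service (faked
    // here with a hard-coded entry) and publish an immutable snapshot.
    std::shared_ptr<const PrefSnapshot> BuildSnapshot() {
      auto snap = std::make_shared<PrefSnapshot>();
      snap->values["font.name-list.serif.x-western"] = "Times New Roman";
      return snap;
    }

    // Worker thread: read-only lookups against the snapshot.
    std::string Lookup(const PrefSnapshot& aSnap, const std::string& aName) {
      auto it = aSnap.values.find(aName);
      return it != aSnap.values.end() ? it->second : std::string();
    }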
>>>
>>>
>>>
>>> That's probably your best option, yeah. Although I will say that those
>>> kinds of extensible preference sets aren't great for performance or memory
>>> usage, so switching to some other model might be better.
>>>
>>>> Possible? Or would the overhead of locking be too crippling?
>>>
>>>
>>>
>>> The latter, I'm afraid.
>>>
>
>
> --
> Kris Maglione
> Senior Firefox Add-ons Engineer
> Mozilla Corporation
>
> On two occasions I have been asked, "Pray, Mr. Babbage, if you put
> into the machine wrong figures, will the right answers come out?" I am
> not able rightly to apprehend the kind of confusion of ideas that
> could provoke such a question.
>         --Charles Babbage
>