If we find anything interesting we will definitely share it. The initial
trial, which only tested the per-profile limit changes (
https://groups.google.com/a/chromium.org/g/blink-dev/c/1r-i4Koc5nM/), didn't
turn up any statistically significant performance impact one way or the
other. We suspect this is because the pool is rarely saturated.

~ Ari Chivukula (Their/There/They're)

On Mon, Sep 15, 2025, 13:56 Alex Russell <[email protected]> wrote:

> Exciting experiment! LGTM.
>
> Do you have plans to publish your results?
>
> On Monday, September 15, 2025 at 9:45:35 AM UTC-7 Ari Chivukula wrote:
>
>> Contact emails
>>
>> [email protected], [email protected], [email protected],
>> [email protected]
>> Explainer/Specification
>>
>> None
>>
>> Summary
>>
>> This experiment evaluates the impact of raising the per-profile TCP
>> socket pool size from 256 (the current default
>> <https://source.chromium.org/chromium/chromium/src/+/main:net/socket/client_socket_pool_manager.cc;drc=8b81608b6457dfef865f46e509c79dc60fe3c69b;l=35>)
>> to 513 while adding a per-top-level-site cap of 256 (to ensure that no
>> two tabs can exhaust the pool). The feasibility of raising the
>> per-profile limit to 512 was already studied
>> <https://groups.google.com/a/chromium.org/g/blink-dev/c/1r-i4Koc5nM?e=48417069>
>> and did not yield negative results, and the per-top-level-site cap of 256
>> matches the current per-profile limit, so it should not cause a
>> regression. These new limits will be imposed independently on the
>> WebSocket pool and the normal (HTTP) socket pool.
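[Editor's illustration] The two-level limit described above can be sketched as an admission check that consults both a profile-wide counter and a per-top-level-site counter. This is a hypothetical model for exposition, not Chromium's actual implementation; the class and method names are invented:

```python
# Hypothetical sketch of the proposed two-level socket admission check.
PER_PROFILE_LIMIT = 513       # proposed profile-wide pool size
PER_TOP_LEVEL_SITE_CAP = 256  # proposed per-top-level-site cap

class SocketPool:
    def __init__(self):
        self.total = 0
        self.per_site = {}  # top-level site -> open socket count

    def try_acquire(self, site: str) -> bool:
        """Return True if a new connection may be opened for `site`."""
        if self.total >= PER_PROFILE_LIMIT:
            return False  # profile-wide pool exhausted
        if self.per_site.get(site, 0) >= PER_TOP_LEVEL_SITE_CAP:
            return False  # this site alone may not drain the pool
        self.total += 1
        self.per_site[site] = self.per_site.get(site, 0) + 1
        return True

    def release(self, site: str) -> None:
        self.total -= 1
        self.per_site[site] -= 1
```

Under this model a single site is refused its 257th socket even though the profile-wide pool still has capacity for other sites.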
>>
>>
>> The intent is to roll this experiment directly into a full launch if no
>> ill effects are seen. See the motivation section for more.
>>
>>
>> Blink component
>>
>> Blink>Network
>> <https://issues.chromium.org/issues?q=customfield1222907:%22Blink%3ENetwork%22>
>>
>> TAG review
>>
>> https://github.com/w3ctag/design-reviews/issues/1151
>>
>>
>> Motivation
>>
>> Giving an entire profile a single fixed pool of TCP sockets allows
>> attackers to infer the number of network requests made by other tabs,
>> and thereby learn enough to profile the sites open in them. For example,
>> if a site makes X network requests when the user is logged in and Y when
>> logged out, an attacker can saturate the TCP socket pool, call
>> window.open, and watch pool movement to glean the other site's state.
>> This sort of attack is outlined in more detail here:
>> https://xsleaks.dev/docs/attacks/timing-attacks/connection-pool/
>>
>> To address this sort of attack, we will cap the maximum sockets per
>> top-level site while raising the per-profile limit. That means no single
>> tab can max out the socket pool on its own. While this mitigation does
>> not fully block the attack (it could still be performed by orchestrating
>> three attacking tabs on different sites), it raises the difficulty by
>> preventing it from being performed by just one tab. Widespread adoption
>> of this attack is already made difficult, as multiple attackers acting
>> at once would step on each other and prevent any one of them from
>> monopolizing the pool.
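[Editor's illustration] The arithmetic behind the chosen limits, stated in the Summary ("no two tabs can exhaust the pool"), can be checked directly: under a 256 per-site cap, even two cooperating attacker sites fall one socket short of the 513 profile limit, while three can still saturate it. A minimal sketch (the function name is invented for exposition):

```python
PER_PROFILE_LIMIT = 513  # proposed profile-wide limit
PER_SITE_CAP = 256       # proposed per-top-level-site cap

def max_sockets_held(num_attacker_sites: int) -> int:
    """Most sockets that N cooperating top-level sites can hold at once."""
    return min(num_attacker_sites * PER_SITE_CAP, PER_PROFILE_LIMIT)

assert max_sockets_held(1) == 256  # one tab leaves 257 sockets free
assert max_sockets_held(2) == 512  # two tabs still leave 1 socket free
assert max_sockets_held(3) == 513  # three cooperating sites can saturate
```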
>>
>> Risks
>>
>> Interoperability and Compatibility
>>
>> While other user agents may wish to follow our results, we only
>> anticipate compatibility issues on local machines or remote servers if
>> the number of TCP sockets the browser can open rises (256 -> 513) beyond
>> what Chrome previously allowed. This will be monitored carefully, and
>> any experiment arm showing a significant negative impact on browsing
>> experience will be terminated early.
>>
>> Gecko: https://github.com/mozilla/standards-positions/issues/1299;
>> current global cap of 128-900
>> <https://github.com/mozilla-firefox/firefox/blob/4bd4e4c595499ee51c2e6f4c9f780fe720f454e8/modules/libpref/init/all.js#L1138>
>> (as allowed by OS)
>>
>> WebKit: https://github.com/WebKit/standards-positions/issues/550;
>> current global cap of 256
>> <https://github.com/WebKit/WebKit/blob/d323b2fc4cd2686c828bd8976fae6ec2d2b6311c/Source/WebCore/platform/network/soup/SoupNetworkSession.cpp#L104>
>>
>> Debuggability
>>
>> This will be gated behind the base::Feature
>> kTcpConnectionPoolSizePerTopLevelSiteTrial, so if breakage is suspected,
>> the flag can be turned off to check whether the change is responsible.
>> For how to control feature flags, see this
>> <https://source.chromium.org/chromium/chromium/src/+/main:base/feature_list.h;drc=159a65729cf8fca4d9f453d12d97ab6515360491;l=259>
>> .
>>
>> Measurement
>>
>> A new net log event type,
>> SOCKET_POOL_STALLED_MAX_SOCKETS_PER_TOP_LEVEL_SITE, will be added to
>> track when the new per-site limit is hit, as distinct from the existing
>> SOCKET_POOL_STALLED_MAX_SOCKETS event.
>>
>> An existing metric Net.TcpConnectAttempt.Latency.{Result} will be used to
>> detect increases in overall connection failure rates.
>>
>> Will this feature be supported on all six Blink platforms (Windows, Mac,
>> Linux, ChromeOS, Android, and Android WebView)?
>>
>> No, not WebView. That will have to be studied independently due to the
>> differing constraints.
>>
>> Is this feature fully tested by web-platform-tests?
>>
>> No; as this is a Blink networking-focused change, browser tests or unit
>> tests are more appropriate.
>>
>> Flag name on about://flags
>>
>> None
>>
>> Finch feature name
>>
>> TcpConnectionPoolSizePerTopLevelSiteTrial
>>
>> Rollout plan
>>
>> We will never test more than 5% in each group on stable, and will stay on
>> canary/dev/beta for a while to detect issues before testing stable.
>>
>> Requires code in //chrome?
>>
>> No
>>
>> Tracking bug
>>
>> https://crbug.com/415691664
>>
>> Estimated milestones
>>
>> 142
>>
>> Link to entry on the Chrome Platform Status
>>
>> https://chromestatus.com/feature/6496757559197696
>>
>>
