Source: starlette
Version: 0.46.1-2
Severity: medium
Tags: patch security
Justification: CVE-2025-54121
Followup-For: Bug #1109805
Control: tags -1 patch

Dear Maintainer,

This is a **Non-maintainer upload** to address **CVE-2025-54121** in 
python3-starlette.

### Vulnerability Summary (CVE-2025-54121)

In Starlette versions <= 0.47.1, when uploading large files using `UploadFile`, 
the `SpooledTemporaryFile` rollover to disk occurs *synchronously* on the event 
loop thread. This leads to long pauses (several seconds) in async processing 
under concurrent uploads, resulting in **blocked event loop** behavior and 
degraded server responsiveness.

### Upstream Fix

Upstream commit:  
https://github.com/encode/starlette/commit/9f7ec2eb512fcc3fe90b43cb9dd9e1d08696bec1
  
Author: Ethan <etha...@users.noreply.github.com>  
Co-authored-by: Marcelo Trylesinski <marcelotr...@gmail.com>  
Merged for version: **0.47.2**

### Patch Details

The patch:

This patch is backported cleanly to 0.46.1-2 and verified to prevent event loop 
delays during concurrent uploads in both synthetic and real-world test cases.

### Debian Changes

- Bump version to `0.46.1-3`.
- Add a patch under `debian/patches` with DEP-3 headers referencing the CVE and 
upstream fix.
- Verified with `debuild -us -uc` and tested in a Trixie container.
- No regression observed in runtime or functionality.

### Test Result Summary

- Patched version prevents event loop blocking (`Δ=1s` maintained during 10-700 
concurrent uploads).
- Matches behavior of upstream 0.47.2.

Please consider applying this patch or uploading an updated version with the 
upstream fix.

Best regards,  
Young Wang

-- System Information:
Debian Release: 13.0
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 5.15.0-138-generic (SMP w/88 CPU threads)
Locale: LANG=C, LC_CTYPE=C (charmap=ANSI_X3.4-1968) (ignored: LC_ALL set to C), 
LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: unable to detect
diff -Nru starlette-0.46.1/debian/changelog starlette-0.46.1/debian/changelog
--- starlette-0.46.1/debian/changelog   2025-03-11 21:20:00.000000000 +0000
+++ starlette-0.46.1/debian/changelog   2025-07-25 15:20:00.000000000 +0000
@@ -1,3 +1,12 @@
+starlette (0.46.1-3) unstable; urgency=medium
+
+  * Non-maintainer upload.
+  * Fix CVE-2025-54121: Avoid event loop blocking during multipart file uploads
+    by writing to disk using thread pool to prevent synchronous blocking when
+    SpooledTemporaryFile rolls over to disk. (Closes: #1109805)
+
+ -- Yang Wang <yang.w...@windriver.com>  Fri, 25 Jul 2025 11:20:00 -0400
+
 starlette (0.46.1-2) unstable; urgency=medium
 
   * Team upload.
diff -Nru starlette-0.46.1/debian/patches/fix-cve-2024-28849-async-write.patch 
starlette-0.46.1/debian/patches/fix-cve-2024-28849-async-write.patch
--- starlette-0.46.1/debian/patches/fix-cve-2024-28849-async-write.patch        
1970-01-01 00:00:00.000000000 +0000
+++ starlette-0.46.1/debian/patches/fix-cve-2024-28849-async-write.patch        
2025-07-25 15:20:00.000000000 +0000
@@ -0,0 +1,140 @@
+Index: starlette-0.46.1/starlette/datastructures.py
+===================================================================
+--- starlette-0.46.1.orig/starlette/datastructures.py
++++ starlette-0.46.1/starlette/datastructures.py
+@@ -424,6 +424,10 @@ class UploadFile:
+         self.size = size
+         self.headers = headers or Headers()
+ 
++        # Capture max size from SpooledTemporaryFile if one is provided. This 
slightly speeds up future checks.
++        # Note 0 means unlimited mirroring SpooledTemporaryFile's __init__
++        self._max_mem_size = getattr(self.file, "_max_size", 0)
++
+     @property
+     def content_type(self) -> str | None:
+         return self.headers.get("content-type", None)
+@@ -434,14 +438,24 @@ class UploadFile:
+         rolled_to_disk = getattr(self.file, "_rolled", True)
+         return not rolled_to_disk
+ 
++    def _will_roll(self, size_to_add: int) -> bool:
++        # If we're not in_memory then we will always roll
++        if not self._in_memory:
++            return True
++
++        # Check for SpooledTemporaryFile._max_size
++        future_size = self.file.tell() + size_to_add
++        return bool(future_size > self._max_mem_size) if self._max_mem_size 
else False
++
+     async def write(self, data: bytes) -> None:
++        new_data_len = len(data)
+         if self.size is not None:
+-            self.size += len(data)
++            self.size += new_data_len
+ 
+-        if self._in_memory:
+-            self.file.write(data)
+-        else:
++        if self._will_roll(new_data_len):
+             await run_in_threadpool(self.file.write, data)
++        else:
++            self.file.write(data)
+ 
+     async def read(self, size: int = -1) -> bytes:
+         if self._in_memory:
+Index: starlette-0.46.1/tests/test_formparsers.py
+===================================================================
+--- starlette-0.46.1.orig/tests/test_formparsers.py
++++ starlette-0.46.1/tests/test_formparsers.py
+@@ -1,15 +1,20 @@
+ from __future__ import annotations
+ 
+ import os
++import threading
++from collections.abc import Generator
+ import typing
+ from contextlib import nullcontext as does_not_raise
++from io import BytesIO
+ from pathlib import Path
++from tempfile import SpooledTemporaryFile
++from unittest import mock
+ 
+ import pytest
+ 
+ from starlette.applications import Starlette
+ from starlette.datastructures import UploadFile
+-from starlette.formparsers import MultiPartException, _user_safe_decode
++from starlette.formparsers import MultiPartException, MultiPartParser, 
_user_safe_decode
+ from starlette.requests import Request
+ from starlette.responses import JSONResponse
+ from starlette.routing import Mount
+@@ -104,6 +109,22 @@ async def app_read_body(scope: Scope, re
+     await response(scope, receive, send)
+ 
+ 
++async def app_monitor_thread(scope: Scope, receive: Receive, send: Send) -> 
None:
++    """Helper app to monitor what thread the app was called on.
++
++    This can later be used to validate thread/event loop operations.
++    """
++    request = Request(scope, receive)
++
++    # Make sure we parse the form
++    await request.form()
++    await request.close()
++
++    # Send back the current thread id
++    response = JSONResponse({"thread_ident": 
threading.current_thread().ident})
++    await response(scope, receive, send)
++
++
+ def make_app_max_parts(max_files: int = 1000, max_fields: int = 1000, 
max_part_size: int = 1024 * 1024) -> ASGIApp:
+     async def app(scope: Scope, receive: Receive, send: Send) -> None:
+         request = Request(scope, receive)
+@@ -302,6 +323,46 @@ def test_multipart_request_mixed_files_a
+         "field1": "value1",
+     }
+ 
++class ThreadTrackingSpooledTemporaryFile(SpooledTemporaryFile[bytes]):
++    """Helper class to track which threads performed the rollover operation.
++
++    This is not threadsafe/multi-test safe.
++    """
++
++    rollover_threads: typing.ClassVar[set[int | None]] = set()
++
++    def rollover(self) -> None:
++        
ThreadTrackingSpooledTemporaryFile.rollover_threads.add(threading.current_thread().ident)
++        super().rollover()
++
++
++@pytest.fixture
++def mock_spooled_temporary_file() -> Generator[None]:
++    try:
++        with mock.patch("starlette.formparsers.SpooledTemporaryFile", 
ThreadTrackingSpooledTemporaryFile):
++            yield
++    finally:
++        ThreadTrackingSpooledTemporaryFile.rollover_threads.clear()
++
++
++def test_multipart_request_large_file_rollover_in_background_thread(
++    mock_spooled_temporary_file: None, test_client_factory: TestClientFactory
++) -> None:
++    """Test that Spooled file rollovers happen in background threads."""
++    data = BytesIO(b" " * (MultiPartParser.spool_max_size + 1))
++
++    client = test_client_factory(app_monitor_thread)
++    response = client.post("/", files=[("test_large", data)])
++    assert response.status_code == 200
++
++    # Parse the event thread id from the API response and ensure we have one
++    app_thread_ident = response.json().get("thread_ident")
++    assert app_thread_ident is not None
++
++    # Ensure the app thread was not the same as the rollover one and that a 
rollover thread exists
++    assert app_thread_ident not in 
ThreadTrackingSpooledTemporaryFile.rollover_threads
++    assert len(ThreadTrackingSpooledTemporaryFile.rollover_threads) == 1
++
+ 
+ def test_multipart_request_with_charset_for_filename(tmpdir: Path, 
test_client_factory: TestClientFactory) -> None:
+     client = test_client_factory(app)
diff -Nru starlette-0.46.1/debian/patches/series 
starlette-0.46.1/debian/patches/series
--- starlette-0.46.1/debian/patches/series      2025-03-11 21:19:48.000000000 
+0000
+++ starlette-0.46.1/debian/patches/series      2025-07-25 15:20:00.000000000 
+0000
@@ -1 +1,2 @@
 json-format.patch
+fix-cve-2024-28849-async-write.patch

Reply via email to