This is an automated email from the ASF dual-hosted git repository.

vatamane pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/couchdb.git

commit 5b2db4fa634743067bd733ab1edcba2fa5f297f3
Author: Nick Vatamaniuc <[email protected]>
AuthorDate: Wed May 28 16:11:41 2025 -0400

    Don't spawn more than one init_delete_dir instance
    
    These are unsupervised processes and we spawn one for each scheduler. So on
    an 80 CPU server we'd spawn 80 of them, all concurrently traversing the same
    directory tree and deleting the same files. Some end up crashing with
    `{error,enoent}` if the file was deleted by another cleaner.
    
    Instead let's spawn just one of them from the first couch_server instance.
---
 src/couch/src/couch_server.erl | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/src/couch/src/couch_server.erl b/src/couch/src/couch_server.erl
index ca12a56fa..aee2d9904 100644
--- a/src/couch/src/couch_server.erl
+++ b/src/couch/src/couch_server.erl
@@ -303,7 +303,12 @@ init([N]) ->
         "couchdb", "update_lru_on_read", false
     ),
     ok = config:listen_for_changes(?MODULE, N),
-    ok = couch_file:init_delete_dir(RootDir),
+    % Spawn async .deleted files recursive cleaner, but only
+    % for the first sharded couch_server instance
+    case N of
+        1 -> ok = couch_file:init_delete_dir(RootDir);
+        _ -> ok
+    end,
     ets:new(couch_dbs(N), [
         set,
         protected,
