Hi,

I wrote to the upstream community, and it turns out they already have a more versatile fix, which has already been merged to `next`.
The reply I received is linked below:

https://lore.kernel.org/git/aaefbu-mmy_73...@pks.im/

Thanks,
Pranav

________________________________________
From: Pranav P <pranav...@ibm.com>
Sent: Tuesday, April 22, 2025 4:30 PM
To: Bastian Blank; debian-s390; 1102...@bugs.debian.org; elbrus; elbrus
Subject: Re: [EXTERNAL] Re: git ftbfs on s390x (test failures)

Hi,

I was able to narrow down the problem and I do have an idea for a fix.

diff --git a/builtin/backfill.c b/builtin/backfill.c
index 33e1ea2f84..18f9701487 100644
--- a/builtin/backfill.c
+++ b/builtin/backfill.c
@@ -123,7 +123,7 @@ int cmd_backfill(int argc, const char **argv, const char *prefix, struct reposit
                .sparse = 0,
        };
        struct option options[] = {
-               OPT_INTEGER(0, "min-batch-size", &ctx.min_batch_size,
+               OPT_MAGNITUDE(0, "min-batch-size", &ctx.min_batch_size,
                            N_("Minimum number of objects to request at a time")),
                OPT_BOOL(0, "sparse", &ctx.sparse,
                         N_("Restrict the missing objects to the current sparse-checkout")),


This passes all of the test cases. However, with this change a size_t pointer is being cast to an unsigned long pointer, which could still cause problems on some architectures.
Alternatively, if I can verify that min_batch_size (currently a size_t) never needs to hold a value larger than an int can store, we could change its type from size_t to int and keep OPT_INTEGER. So I was checking whether there is any evidence of the expected range of min_batch_size, and I was also planning to ask the upstream community for clarification.
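
For context, here is a minimal standalone sketch of the underlying width problem (my own illustration, not git code; the variable names are made up). Storing a 32-bit value through a pointer that really points at a 64-bit size_t lands in the high-order bytes on a big-endian 64-bit machine such as s390x, so the value reads back wildly wrong, while on little-endian machines the small value happens to survive:

/*
 * Sketch only: simulates an option parser writing a 32-bit int into a
 * variable that is actually a 64-bit size_t.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	size_t batch = 0;   /* stand-in for ctx.min_batch_size */
	int parsed = 100;   /* what an int-based parser would produce */

	/* Mismatched store: only the first 4 bytes of 'batch' are written. */
	memcpy(&batch, &parsed, sizeof parsed);

	printf("size_t read back: %zu\n", batch);
	/* little-endian 64-bit: 100; big-endian 64-bit (s390x): 429496729600 */
	return 0;
}

Since OPT_MAGNITUDE writes an unsigned long, which has the same width as size_t on s390x, the mis-sized store goes away and the tests pass with the one-line change above; the size_t vs. unsigned long pointer question I mentioned is what remains.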

Thanks,
Pranav
