Package: dgit-infrastructure
Version: 14.7

Anton Gladky writes ("Re: git debpush only with debian folder in repo"):
> Maybe a little bit of background information.

Thanks!  I want to understand what happened, so we can make sure the
software is working correctly.

> boost repo had a long time only debian folder inside. And tag2upload
> did not work with that even with the baredebian option.

I think baredebian+git was needed, and that means the upstream branch
would have to be in the Salsa repo.  I vaguely remember a previous
conversation about this.  Anyway:

> I decided to refresh the upstream branch with the latest release and
> merged it into the master branch. First upload attempt failed (just no
> reaction),

To be sure I am understanding you:

I think you're saying you merged the upstream branch, thereby
changing the git branch format, and then you did a git-debpush of
1.90.0-4.  Presumably you used --quilt=gbp, then.  (That's what I see
in the tag for job 2871, the successful upload.)

You say this produced no reaction.  When was that, do you know?

I grepped the service logs and found three similar messages recently:
  2026-02-19T05:42:19.043603Z
  2026-02-19T05:47:38.631668Z
  2026-02-19T06:00:40.463935Z

In each case there was a pair of messages:

  DEBUG tag2upload_service_manager::webhook: rejected early: 400 Bad
  Request, misconfigured (or malfunctioning) web hook: body parsing
  failed (413 Payload Too Large): i/o error: data limit exceeded

  WARN rocket::data::data_stream::_: Data limit reached while reading
  the request body.

> I deleted the tag and recreated it again.

Does that mean (a) you created an entirely fresh tag, by deleting the
tag everywhere and running git-debpush again, or (b) you deleted the
existing tag from Salsa and re-pushed *the same tag* (with "git push"
or equivalent)?
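
Roughly, I mean the difference between (an untested sketch, with
<tag> standing in for the tag2upload tag name):

  (a)  $ git tag -d <tag>                   # delete the local tag
       $ git push origin :refs/tags/<tag>   # delete it on Salsa
       $ git debpush [options]              # make and push a fresh tag

  (b)  $ git push origin :refs/tags/<tag>   # delete it on Salsa only
       $ git push origin <tag>              # push the same tag object again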

Note that git-debpush will typically re-push an existing tag if the
tag exists locally but not on Salsa.

And, when was this?

> Maybe the diff was too large between 1.90.0-3 and 1.90.0-4 because I
> merged the upstream branch between those versions?

This is a good theory.

I looked at the test tag object we have for unit tests in the t2usm
repository, because that is a reformatted version of a real webhook
payload.  It contains this:

  "commits": [
    {
      "id": "b8e6d37e61b2eebc030752f0f4962960d87c1af4",
      "message": "Finalise 1.40\n\nSigned-off-by: Ian Jackson 
<[email protected]>\n",
      "title": "Finalise 1.40",
      "timestamp": "2024-09-07T12:36:43+01:00",
      "url": 
"https://salsa.debian.org/dgit-team/dgit-test-dummy/-/commit/b8e6d37e61b2eebc030752f0f4962960d87c1af4";,
      "author": {
        "name": "Ian Jackson",
        "email": "[REDACTED]"
      },
      "added": [
 
      ],
      "modified": [
        "debian/changelog"
      ],
      "removed": [
 
      ]
    }
  ],

I think that means it includes a list of all the changed files.
I cloned your repo and did this:

 $ git show --stat 40b44e9ce77ce442c593a316e36d8edbe6f1648c | wc -c
 3422018
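
For comparison, listing only the changed file names (which is roughly
what the webhook serialises into added/modified/removed) would be
something like this, untested:

 $ git show --pretty=format: --name-only 40b44e9ce77ce442c593a316e36d8edbe6f1648c | wc -c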

I think therefore that it's entirely plausible that this payload was
simply too large.  (Obviously that is not your fault.)


I think that if you had waited, the polling scraper would have found
your tag and processed it, as it eventually did after you re-pushed.

But I also think we should probably increase the data size limit.
The webhook receiver checks the calling IP address *before* attempting
to receive the payload.  So if gitlab wants to send us megabytes of
useless file listings, we could just swallow them.
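
Something along these lines, perhaps (an untested sketch: it assumes
Rocket 0.5, and the "json" limit name and 16MiB figure are only
illustrative -- whichever named limit the webhook route actually
reads its body under is the one to raise):

  use rocket::data::{Limits, ToByteUnit};

  #[rocket::launch]
  fn rocket() -> _ {
      let figment = rocket::Config::figment()
          // e.g. allow request bodies up to 16 MiB for the "json" limit
          .merge(("limits", Limits::default().limit("json", 16.mebibytes())));
      rocket::custom(figment)
          // .mount(...) the existing routes here, as before
  }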

This kind of very large transfer isn't common.  It looks like there
were seven since the start of February.  That count would include
non-t2u tags, which wouldn't show up as lost webhooks on the forge
page.  So it's plausible that this was the first time this has
happened to a real tag2upload job.

Ian.
