On Apr 7, 2025, at 09:44, Mark Millard <mark...@yahoo.com> wrote:

> On Apr 7, 2025, at 08:14, Baptiste Daroussin <b...@freebsd.org> wrote:
> 
>> On Mon 07 Apr 08:07, Mark Millard wrote:
>>> . . .
>> 
>> Listing like this is clearly not useful; the problem we have is that the
>> performance changes depending on what is happening in parallel on the
>> machines.
> 
> I've been looking for an example that reproduces the
> timing issue via commands like:
> 
> # poudriere bulk -jrelease-aarch64  -v -p default -c www/gitlab@ee
> vs.
> # poudriere bulk -jrelease-aarch64  -v -p alt -c www/gitlab@ee
> 
> so that prior builds are not involved in creating such a context.
> Also, when www/gitlab@ee itself is building, no other builder will
> be active.
> 
> I've started such a build based on a pkg 2.0.6 /usr/ports/ context
> and will try one based on a pkg 2.1.0 /usr/ports-alt/ context.
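
For completeness, something like the following registers such a tree
with poudriere's null method, pointing at an existing checkout (a
sketch; the /usr/ports-alt/ path is from my setup):

# poudriere ports -c -p alt -m null -M /usr/ports-alt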
> 
> I'm trying www/gitlab@ee because, on beefy17, it went from:
> 
> 00:09:01 (pre pkg 2.1.0 example)

I must have clicked on the wrong thing for the above. Looking again:

build of www/gitlab@ee | gitlab-ee-17.10.0 ended at Wed Mar 26 10:27:22 UTC 2025
build time: 00:13:50

> to:
> 05:35:01 (pkg 2.1.0 example)
> 
> (so somewhat over 37 times longer) and, when I looked it up,
> it had a huge number of dependencies:
> 
> # pkg rquery -U -r FreeBSD "%#d : %n %o" www/gitlab@ee
> 298 : gitlab-ee www/gitlab
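
The full list behind that count is available via another pkg rquery
format string (%dn/%dv give the dependency name and version):

# pkg rquery -U -r FreeBSD '%dn-%dv' www/gitlab@ee | sort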
> 
> The factor of 37 is large enough that load averages on beefy17
> are unlikely to be the only major contributor. Given the
> evidence about the count of dependencies, I will see what
> I get.
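
For the numbers: 05:35:01 is 20101 s and 00:09:01 is 541 s, so
20101/541 =~ 37.2. With the corrected 00:13:50 (830 s) the ratio
would be 20101/830 =~ 24.2, still a large factor.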
> 
> The test environment is an Apple Silicon M4 MAX system with
> FreeBSD running under Parallels on macOS.
> 
> [00:00:07] Building 943 packages using up to 14 builders
> 
> 
> OOPS (via checking ampere2 logs):
> 
> Looks like aarch64 might end up blocked for a
> rubygem-redis-actionpack-rails70 "Phase: stage" failure. I
> may have to set up an amd64 context for such experiments.
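
If it does fail, confirming and grabbing the log locally is
straightforward (again assuming poudriere's default log layout, where
failed builds are linked under logs/errors/):

# ls /usr/local/poudriere/data/logs/bulk/release-aarch64-default/latest/logs/errors/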

The above looks to be another example of looking at
the wrong thing.

But, as I'm using two different vintages of the ports tree,
there could be an issue for one even if there is not for
the other.

>> which makes the performance issues invisible on local poudriere if you want
>> to test it on port A or port B. If we want to reduce the performance penalty,
>> we need to be able to make a reproducible case which can then be profiled, to
>> know where to optimize if needed.
>> 
>> I have tried to reproduce each individual case which happens in the ports
>> tree and I am not able to reproduce them, so it is impossible to know exactly
>> where to look.
> 
> I'm hoping to supply reproducible steps.
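
Once I have them, crude first-pass timing of an individual pkg
invocation could come from something like the following (illustrative
only; which pkg operation regressed is exactly what is unknown):

# /usr/bin/time -l pkg rquery -U -r FreeBSD '%n' www/gitlab@ee > /dev/null
# truss -c -o /tmp/pkg.truss pkg rquery -U -r FreeBSD '%n' www/gitlab@ee > /dev/null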
> 
>> I know what is new and what causes the performance penalty, but not
>> which part is causing the much larger penalty on the cluster.




===
Mark Millard
marklmi at yahoo.com

