On Wed, Jul 6, 2022 at 9:40 PM Thiago Jung Bauermann <
thiago.bauerm...@linaro.org> wrote:

>
> Hello,
>
> I looked into ccache usage on the LLVM build bots.
>
> Mehdi AMINI <joker....@gmail.com> writes:
>
> > On Wed, Jun 29, 2022 at 3:39 PM Maxim Kuvyrkov <
> maxim.kuvyr...@linaro.org>
> > wrote:
> >
> >> We experimented with using zorg's CCACHE settings a few years back,
> >> and it turned out to be more robust to configure ccache at the level of
> >> the default system (well, container) compiler.
> >>
> >> One thing to check is whether the default 5GB cache limit fits us well. IIUC,
> >> flang builds are particularly big, and they may overflow the cache size.
> >>
> >
> > Oh yeah, anything under 20GB is likely doomed, in particular if you share
> > the cache across configs (like one machine building gcc and clang).
>
> Yes, we do share the cache like that.
>
> > Can you try to print cache statistics? Maybe tweak the job to clear the
> > stats before the job and print them after each build?
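
To make that concrete, a minimal sketch of what I have in mind, assuming the
build step is a shell script with ccache on PATH (the actual cmake/ninja
invocation is elided):

    # Reset the hit/miss counters so the numbers cover only this run.
    ccache --zero-stats

    # ... the usual configure/build steps go here ...

    # Print the hit rate and cache size at the end of the job.
    ccache --show-stats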
>
> We were using the default ccache size of 5 GB on all the LLVM bots.
> I have increased them now. Some machines have bigger and/or emptier
> disks than others, so I chose different cache sizes on different build
> hosts. I'll provide more detailed information in a separate email.
>
> The machine that does the flang-aarch64-latest-gcc job (and also
> flang-aarch64-latest-clang as well as other flang and clang jobs) has a
> big and relatively empty disk, so I increased its cache size to 100 GB.
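
Sounds good. For reference, with a reasonably recent ccache that is a
one-liner (the new limit is persisted in the cache directory's ccache.conf),
run as whichever user the buildbot worker runs as:

    # Raise the per-host cache limit; pick a size that fits the disk.
    ccache --max-size=100G

    # The statistics output also shows the configured maximum, so the
    # change is easy to verify.
    ccache -s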
>
> >> > On 29 Jun 2022, at 16:33, David Spickett <david.spick...@linaro.org> wrote:
> >> >
> >> > While it's not visible in the zorg config, we are using ccache. We do it
> >> > by setting the compiler to a script that runs the expected clang/gcc via
> >> > ccache. We can certainly look at using the ccache setting in zorg instead
> >> > (for the first attempt it was easier to do it in a way we could control
> >> > on our end).
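
For anyone following along, the wrapper David describes is typically just a
couple of lines. This is only an illustrative sketch; the real Linaro script
and compiler path aren't shown here:

    #!/bin/sh
    # Forward every compiler invocation through ccache; "$@" passes the
    # original arguments through unchanged. /usr/bin/clang is a placeholder.
    exec ccache /usr/bin/clang "$@"

CMake is then pointed at the wrapper with -DCMAKE_C_COMPILER=... and
-DCMAKE_CXX_COMPILER=....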
> >> >
> >> > Looking at our flang bots, overall 2 hours seems to be the average
> >> > (out of tree is an outlier); I don't know anything about non-Linaro
> >> > flang bots. We will check if there is some obvious bottleneck here, but
> >> > we have resource constraints that limit how fast we can go even with
> >> > perfect caching. Are there any other bots you were interested in? We
> >> > can check those too.
> >> >
> >> > What build times were you expecting to see? It is useful for us to
> >> > know what expectations are even if, unfortunately, we don't meet them
> >> > at this time.
> >>
> >
> > flang-x86_64-knl-linux seems to average 15-20 min here, which is more
> > like what I would expect.
>
> flang-aarch64-latest-gcc builds now take between 10m and 30m, with an
> occasional build taking 1h. flang-aarch64-latest-clang is similar.
>

That's a huge improvement! :)


>
> > Even there they could go much faster: we could avoid building the world
> and
> > only build flang and the test dependencies. Right now the bottleneck is
> > linking all of the LLVM tools that aren't relevant for testing flang.
> >
> > Compare with the way I set up the MLIR bots:
> > https://lab.llvm.org/buildbot/#/builders/61/builds/28582
> > The build step here is exclusively building the binaries needed for running
> > `check-mlir` and nothing more.
> >
> > MLIR is smaller than flang, but we're still having a turnaround of 3-5 min
> > when the cache is hot.
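
Concretely, the build step on those bots just invokes the check target and
lets ninja pull in only what the tests depend on. A sketch of the flang
equivalent, assuming the usual check-flang target and an already-configured
build directory:

    # Instead of running 'ninja' (build everything) and then the tests,
    # go straight to the check target: it builds flang, the test utilities
    # it needs, and nothing else, then runs the lit tests.
    ninja check-flang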
>
> I haven't looked into that approach.
>
> --
> Thiago
>