On Thu, 2018-08-30 at 14:27 +0100, Emil Velikov wrote:
> Hi Juan,
> 
> Something I should have mentioned earlier:
> Pardon if my questions are a bit rough - my rocker/docker experience is
> limited.


I'm more than happy to answer any question :)

> 
> On 30 August 2018 at 11:22, Juan A. Suarez Romero <[email protected]> wrote:
> > On Wed, 2018-08-29 at 14:11 +0100, Emil Velikov wrote:
> > > Hi Juan,
> > > 
> > > I've shared a number of suggestions. I'll leave that to you if they
> > > will be in v3 or patches on top.
> > > 
> > > On 29 August 2018 at 11:12, Juan A. Suarez Romero <[email protected]> wrote:
> > > 
> > > > In order to build the images, Rocker is used. This is a tool that
> > > > extends the Dockerfiles with new features that are quite interesting
> > > > here. The main features we use are the support for templating, and the
> > > > support for mounting external directories during the image building.
> > > > This helps us use tools like ccache to improve the build speed.
> > > > 
> > > 
> > > I think that gitlab-ci supports templating - not sure about mounting
> > > external directories.
> > 
> > 
> > We need the templating in the Dockerfile, not in the gitlab-ci itself. Same
> > for mounting external directories: we need the mounting (not a hard
> > requirement, but it speeds up the build step) inside the docker build
> > process.
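> >
> > For instance, the two Rocker features together look roughly like this in a
> > Rockerfile (a minimal sketch; the FROM and package lines are made up):
> >
> >   FROM debian:stretch
> >   {{ if eq .LLVM "3.9" }}    # templating: one file, many image variants
> >   RUN apt-get -y install llvm-3.9-dev
> >   {{ end }}
> >   MOUNT .:/context           # host dir (e.g. the ccache) shared at build time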
> > 
> 
> Simply put, one could see .gitlab-ci.yml as a form of dockerfile,
> since gitlab-ci uses Docker.
> The former is capable of storing artefacts, although I'd imagine the extra
> docker/rocker here is used solely to share and reuse the artefacts and
> dependencies.
> 

It is like a form of Dockerfile, but it is not one, because you can't build an
image from it, unless GitLab implements some specific feature to allow it.

I'll explain better. What happens is that the .gitlab-ci.yml content is
executed inside a container; that is, it's like executing `docker run
<container> /bin/bash` and then executing each of the commands in the YAML.

But that's all: it just executes the commands in a container; it doesn't
generate any image, which is what we want. That's why we use docker/rocker
build.

Docker allows you to 'commit' a container as a new image. It would be really
useful if GitLab provided a way to say "save the result of running the job as
an image"; in that case we could use .gitlab-ci.yml instead of the Dockerfile.
But at the moment, we can't.
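For reference, the manual equivalent of what GitLab would need to do is
something like this (a sketch; the container and image names are made up):

    # run the job's commands in a container, keeping the container around
    docker run --name ci-job debian:stretch /bin/sh -c "apt-get update"
    # snapshot the container's filesystem as a reusable image
    docker commit ci-job registry.example.com/mesa/base:latest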


> Am I in the right ballpark?
> 
> Asking all these questions since the double-docker does seem a bit strange.
> 
> 
> > > > +before_script:
> > > > +  - mkdir -p ccache
> > > > +  - rm -fr ../ccache
> > > > +  - mv ccache ../
> > > 
> > > nit: how does this look
> > > rm -rf ../ccache
> > > mkdir -p ../ccache
> > > 
> > 
> > It wouldn't work.
> > 
> 
> Hmm, something sounds badly broken.
> I would imagine you (or perhaps I) misread some of the .. in the commands.
> 
> 
> > > Although why do we .. in the first place? Mind adding a comment?
> > 
> > 
> > Yup. First of all, we cache the ccache directory in GitLab, so when
> > re-building a Docker image we save time.
> >
> > GitLab only caches directories that are in the same path where the git
> > clone/fetch is done. That is, it restores/saves directories that are in the
> > same place as the full Mesa code.
> >
> > But we don't want to have the ccache there, because when building the image
> > we add all the cloned content, and that would bake the full ccache into the
> > image. We don't want that; we just want to mount it externally.
> > 
> 
> AKA keeping the docker images small, while sharing the ccache. Makes sense.
> 
> > Thus, we move the restored ccache to a different place at the beginning
> > (the "mv ccache ../" in the before_script) and move it back at the end so
> > GitLab can store the new cache (the "mv ../ccache ./" in the after_script).
> >
> > As the very first time there is no ccache to restore, we ensure there's
> > always a ccache directory (hence the mkdir call). The "rm -fr ../ccache" is
> > to ensure there's nothing there before doing the move. I think this is a
> > safe measure, as usually there shouldn't be anything there.
> > 
> 
> I think that a cancelled job may leave a ../ccache around.
> 
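Right, and that is exactly what the "rm -fr ../ccache" guards against. For
reference, the whole dance boils down to this in the .gitlab-ci.yml (a sketch;
the comments are mine):

    before_script:
      - mkdir -p ccache     # very first run: no cache was restored
      - rm -fr ../ccache    # drop leftovers, e.g. from a cancelled job
      - mv ccache ../       # park it outside the clone, away from the build
    after_script:
      - mv ../ccache ./     # bring it back so GitLab can store the new cache
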
> 
> > > 
> > > I'm fairly sure we can drop anything older than 3.9 through the series.
> > > Jose was pretty clear that they're aiming/moved to llvm 5.0
> > > 
> > 
> > I'm fine with this. We test so many versions because those are the minimum
> > required versions according to configure.ac.
> >
> > If we think it is not needed to check the older versions, we can remove
> > them. On the other hand, we could keep them only when testing release
> > branches, and remove them when testing master.
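> >
> > (In gitlab-ci terms that could be an `only:` filter on the old-LLVM jobs;
> > a sketch, with a made-up branch pattern:)
> >
> >   test-llvm-3.3:
> >     only:
> >       - /^\d+\.\d+$/    # run only on release branches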
> > 
> 
> Good call. Thinking about it - keeping the LLVM bits in your patches
> as-is and tweaking the version (in configure/travis/gitlab-ci/etc)
> would be better.
> 
> > 
> > > 
> > > > +RUN apt-get update                                                                     \
> > > > +  && apt-get --no-install-recommends -y install autoconf automake gcc g++ libtool-bin  \
> > > > +    pkg-config gettext ccache make scons bison flex sudo git wget bzip2 xz-utils       \
> > > > +    libclc-dev libelf-dev libexpat1-dev libffi-dev libomxil-bellagio-dev               \
> > > > +    libpciaccess-dev libx11-xcb-dev libxdamage-dev libxml2-dev libxrender-dev          \
> > > > +    libxvmc-dev libunwind-dev zlib1g-dev python-pip python-setuptools python-wheel     \
> > > > +    python3-pip python3-setuptools python3-wheel                                       \
> > > > +  && rm -fr /var/lib/apt/lists/*
> > > 
> > > Why do we need the rm after apt-get?
> > > 
> > > 
> > 
> > This is suggested by Docker as a good pattern to install packages in Docker
> > images. We update, install the packages, and remove the files generated by
> > the update, everything in a single command (and therefore a single image
> > layer), so we keep the Docker image size reduced to the minimum.
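> >
> > The single layer is the key detail: each RUN creates a layer, and files
> > removed by a later RUN still occupy the earlier layer. A minimal sketch of
> > the difference (made-up package):
> >
> >   # grows the image: the apt lists survive in the first layer
> >   RUN apt-get update && apt-get -y install gcc
> >   RUN rm -fr /var/lib/apt/lists/*
> >
> >   # stays small: the cleanup happens inside the same layer
> >   RUN apt-get update && apt-get -y install gcc \
> >     && rm -fr /var/lib/apt/lists/*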
> > 
> 
> Fair enough. I was thinking about the duplicated "sync the local package DB",
> which is admittedly nothing compared to keeping the images small.
> 
> > > > +USER local
> > > > +
> > > 
> > > A user with name "local" sounds a bit strange. Is that the recommendation?
> > 
> > We just picked "local" as the username, but we're fine with using a
> > different name (like "mesa" or "user").
> > 
> 
> Personally I would use base-builder/llvm-builder/mesa-builder for the
> respective files.
> It tends to make things a bit more obvious ... but that's just me ;-)
> 

The thing is that in the final image you would end up with 3 users, as the
users are inherited by the images. I think it would confuse people that use
the final image.

I think using one user is better; and probably "builder" is a better name :)
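In Rockerfile terms the base image would carry something like this (a sketch;
the sudoers line is my assumption, based on the `sudo make install` calls
later in the files):

    RUN useradd -m builder \
      && echo "builder ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/builder
    USER builder
    WORKDIR /home/builder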

> > Usually in Docker everything is done as "root". But in order to reduce
> > risks, and to be more realistic (developers never build Mesa as root, or at
> > least they shouldn't!), we build everything as non-root.
> > 
> 
> Agreed, thank you for that.
> 
> 
> > > Thinking out loud:
> > > There should be a way to make this a trivial function and feed it the
> > > static data.
> > > ...
> > > wget foo
> > > tar filename(foo)
> > > cd basename(foo)
> > > ...
> > > 
> > 
> > We couldn't figure out how to do it. Maybe there's a way to do loops inside
> > the Rockerfiles. At this moment, this is very similar to what we do in
> > .travis.
> > 
> 
> Agreed, it's not pretty, and we already do it. Perhaps we could flesh
> out some of these/travis bits into a shell script or two and reuse them?
> 
> Something to consider after this work has landed. Please do not bother
> with this now.
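Agreed. For the record, the repeated pattern you sketched would collapse into
a small helper along these lines (hypothetical, untested):

    # fetch a tarball, unpack it, and enter the unpacked directory
    fetch_and_enter() {
        url=$1
        file=${url##*/}
        wget "$url" && tar xf "$file" && cd "${file%.tar.*}"
    }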
> 
> > > 
> > > > --- /dev/null
> > > > +++ b/gitlab-ci/Rockerfile.llvm
> > > > +{{ if eq .LLVM "3.3" }}
> > > > +RUN wget https://people.igalia.com/jasuarez/packages/llvm-3.3_3.3-+checkinstall1_amd64.deb \
> > > > +  && dpkg -i llvm-3.3_3.3-+checkinstall1_amd64.deb                                         \
> > > > +  && rm llvm-3.3_3.3-+checkinstall1_amd64.deb
> > > > +ENV LD_LIBRARY_PATH=/usr/lib/llvm-{{ .LLVM }}/lib:$LD_LIBRARY_PATH
> > > > +
> > > 
> > > With 3.3 gone, there's no need to build our own LLVM package ;-)
> > 
> > According to configure.ac, the minimum required LLVM version for the
> > Gallium drivers is 3.3. This is why we also use LLVM 3.6 and 3.8: to test
> > the Gallium drivers.
> > 
> 
> I seriously doubt anyone is using 3.3 - it was released in Jun 2013.
> Would be great to just drop it for all branches, but we could start
> with master, as you suggested earlier.
> 
> > > 
> > > > +{{ else }}
> > > > +MOUNT .:/context
> > > > +RUN ./autogen.sh                                                \
> > > > +  && make distcheck                                             \
> > > > +  && __version=`cat VERSION`                                    \
> > > > +  && mkdir -p /context/release-output                           \
> > > > +  && mv /home/local/mesa-head.txt /context/release-output       \
> > > > +  && mv mesa-$__version.tar.xz /context/release-output          \
> > > > +  && sudo rm -fr /home/local/mesa
> > > > +
> > > > +{{ else if eq .BUILD "autotools" "gallium" }}
> > > > +
> > > > +RUN export LLVM={{ $llvm_version }}.0                                                                      \
> > > > +  && eval `cat configure.ac | egrep ^LLVM_REQUIRED`                                                        \
> > > > +  && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_GALLIUM ; then GALLIUM_DRIVERS=i915,etnaviv,freedreno,imx,nouveau,pl111,r300,svga,swrast,tegra,v3d,vc4,virgl ; fi \
> > > > +  && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_R600 ; then GALLIUM_DRIVERS=$GALLIUM_DRIVERS,r600 ; fi         \
> > > > +  && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_RADEONSI ; then GALLIUM_DRIVERS=$GALLIUM_DRIVERS,radeonsi ; fi \
> > > > +  && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_SWR ; then GALLIUM_DRIVERS=$GALLIUM_DRIVERS,swr ; fi           \
> > > 
> > > /me dreams of a day where only a single LLVM_REQUIRED will be available
> > > 
> > > > +  && DRI_DRIVERS=i915,i965,nouveau,r200,radeon,swrast                                                     \
> > > > +  && VULKAN_DRIVERS=intel                                                                                 \
> > > > +  && if dpkg --compare-versions $LLVM ge $LLVM_REQUIRED_RADV ; then VULKAN_DRIVERS=$VULKAN_DRIVERS,radeon ; fi \
> > > > +  && ./autogen.sh                                                                                         \
> > > > +    --with-gallium-drivers=$GALLIUM_DRIVERS                                                               \
> > > > +    {{ if eq .BUILD "gallium" }} --with-dri-drivers=""                                                    \
> > > > +    {{ else }} --with-dri-drivers=$DRI_DRIVERS                                                            \
> > > > +    --with-vulkan-drivers=$VULKAN_DRIVERS                                                                 \
> > > > +    --with-platforms=x11,drm,wayland {{ end }}                                                            \
> > > 
> > > Omitting a vulkan/dri/gallium drivers list will default to building
> > > some of them :-\
> > 
> > Right. We need to add '--with-vulkan-drivers=""' when BUILD is gallium, as
> > we are not interested in the Vulkan drivers there.
> > 
> > > Perhaps we could set the *DRIVERS variables conditionally, defaulting to "".
> >
> > We are using three variables, GALLIUM_DRIVERS, DRI_DRIVERS and
> > VULKAN_DRIVERS, that are updated depending on BUILD and also on the LLVM
> > version.
> > 
> 
> It seems to me that the following happens:
> 
> if .BUILD gallium
>  dri-drivers = empty list
>  gallium-drivers = some list
>  vulkan-drivers = some list
> else
>  dri-drivers = some list
>  gallium-drivers = some list
>  vulkan-drivers = some list
> 
> AKA both gallium and vulkan drivers are built twice ... to some extend.

As I said, for the gallium build we could set the vulkan drivers list to
empty, so they are not built twice. But if the Vulkan drivers depend on the
LLVM version like the Gallium drivers do (I'm thinking here of the Radeon
driver), then maybe it would be good to include them.

> The current code assumes that LLVM_REQUIRED_GALLIUM - implies some
> gallium drivers.
> That's wrong - none of the ones listed need LLVM. If we want LLVM 3.3
> testing we could stick with something short like Travis.

Can you build the Gallium drivers without LLVM? According to configure.ac, you
need at least LLVM 3.3 to build the Gallium drivers.

> 
> I'd suggest:
> if .BUILD gallium - aka LLVM version fuzz; from no LLVM to latest
>  dri-drivers = empty list
>  gallium-drivers = some list -> details vary on LLVM version available
>  vulkan-drivers = empty list
> else - with a LLVM version for radv
>  dri-drivers = full list
>  gallium-drivers = empty list
>  vulkan-drivers = full list
> 
> 
> 
> > > 
> > > 
> > > > +    {{ if ne $llvm_version "0.0" }} --enable-llvm --enable-llvm-shared-libs {{ end }} \
> > > > +    {{ if ne $debug_build "false" }} --enable-debug {{ end }}          \
> > > > +    --enable-glx-tls --enable-gbm --enable-egl                         \
> > > > +  && make                                                              \
> > > > +  && make check                                                        \
> > > > +  && sudo make install                                                 \
> > > > +  && sudo ldconfig                                                     \
> > > > +  && sudo rm -fr /home/local/mesa
> > > > +
> > > > +{{ else if eq .BUILD "meson" }}
> > > > +
> > > > +RUN meson  _build                       \
> > > 
> > > Worth adding the gallium/vulkan/dri drivers list here?
> > > 
> > 
> > We didn't add the list because it depends on the LLVM version we are
> > using, so we trust meson to correctly pick all the drivers.
> >
> > If we want to be explicit, maybe we can reuse the list of
> > DRI_/VULKAN_/GALLIUM_DRIVERS here.
> > 
> 
> Agreed. The same reasoning as f9d0e7d3bcd831d52af6a6c46aac4ed4590a8615
> applies here.
> If you agree feel free to keep that as a follow-up patch.
> 

This makes me think that we should use the same driver names in meson and
autotools. For instance, in Vulkan right now we have "radeon" when using
autotools and "amd" when using meson.


> Thanks again for answering my noobish questions :-)
> 

My pleasure.


        J.A.

> HTH
> Emil
> 

_______________________________________________
mesa-dev mailing list
[email protected]
https://lists.freedesktop.org/mailman/listinfo/mesa-dev
