I'll try to answer your questions regarding the new Coin CI briefly here.
So the underlying hardware we have will not be touched. Our hardware will still 
consist of a farm of blades (servers with multiple cores and lots of RAM) and 
Mac Minis taking care of the OS X builds. All of these, including the Mac Minis, 
are connected to a Compellent SAN where we store everything. We run vSphere ESXi 
bare-metal hypervisors on the hardware and run virtual machines on top of that.

The thing that now changes is that we remove Jenkins from the CI. Previously we 
had ~300 virtual machines up and running 24/7 that were distributed to handle 
different operating systems. This distribution was fixed, which meant that 
platform A might have been queuing for hardware, whereas we had plenty to spare 
designated for platform B.

The new system will not keep any virtual machines up and running when no build 
is in progress. As soon as anyone stages something in codereview, we start 
creating new virtual machines on the fly for the requested platforms.
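
To make the idea concrete, here is a very rough sketch of that flow in Python. 
None of this is the actual Coin code; the hypervisor object and the platform 
names are placeholders I made up for illustration.

    # Rough illustration only, not the actual Coin implementation.
    def handle_staged_change(change, requested_platforms, hypervisor):
        """Spin up one fresh VM per requested platform, build, then throw the VMs away."""
        vms = []
        for platform in requested_platforms:               # e.g. "windows-7-vs2013", "ubuntu-14.04"
            vm = hypervisor.clone_from_template(platform)   # hypothetical API: clone a template VM
            vm.start()
            vms.append(vm)
        try:
            return [vm.run_build(change) for vm in vms]     # one build result per platform
        finally:
            for vm in vms:                                  # nothing stays running between builds
                vm.destroy()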

The second thing that changes is what we actually rebuild. With Jenkins we 
rebuilt all prerequisites every time. Imagine a commit staged for QtBase. We had 
to verify that commit on Windows 7 VS2012, Windows 7 VS2013, Windows 8.1 VS2015, 
Windows 7 MinGW 4.9.x, Ubuntu 14.04, openSUSE, OS X 10.8, OS X 10.9, Android, 
iOS, QNX 650 etc. The combined number of platforms tested every time was around 
30. Now, imagine something being staged to QtDeclarative. QtDeclarative 
requires QtBase to be built before itself. So our old CI downloaded the QtBase 
sources and built those before proceeding with QtDeclarative. The new CI has 
stored the binaries created by the last QtBase build, so it only has to 
download a tarball from our internal storage and proceed directly with 
QtDeclarative. That build, in turn, stores its own binaries to the storage. 
This saves a lot of time when building the later modules. This example alone 
saves us from building QtBase 30 times.
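
In essence the prerequisite handling becomes a build cache: a module build 
fetches the stored binaries of its prerequisites instead of rebuilding them. 
Here is a minimal sketch of that fetch step, assuming a made-up storage URL 
and tarball layout:

    import os
    import tarfile
    import urllib.request

    STORAGE_URL = "http://internal-storage.example/builds"   # placeholder, not our real host

    def fetch_prerequisite(module, build_id, workdir):
        """Download and unpack a prebuilt module (e.g. QtBase) instead of building it."""
        tarball = os.path.join(workdir, f"{module}-{build_id}.tar.gz")
        urllib.request.urlretrieve(f"{STORAGE_URL}/{module}/{build_id}.tar.gz", tarball)
        with tarfile.open(tarball) as tf:
            tf.extractall(os.path.join(workdir, module))      # binaries, headers, mkspecs, ...
        return os.path.join(workdir, module)

    # A QtDeclarative build would then roughly do:
    #   qtbase_dir = fetch_prerequisite("qtbase", last_good_qtbase_id, "/build")
    #   ... configure and build QtDeclarative against qtbase_dir, upload its own tarball ...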

Another thing we can tune with the new CI is what kind of VMs we create. 
Smaller Qt modules that don't benefit much from having multiple cores reserved 
for them can get VMs that have perhaps only 2 vCPUs.

We have also split testing out as its own task. This means that even though 
building might get 8 vCPUs, testing that module might get only 2 or 4. We have 
also planned on splitting testing across several VMs where possible, so that 
tests can be run more in parallel than before, thus increasing the throughput 
of the CI.
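
The splitting itself is conceptually simple; here is a toy sketch of dividing a 
module's autotests over several test VMs (the round-robin policy is just for 
illustration, not how Coin actually schedules tests):

    def shard_tests(tests, vm_count):
        """Split a list of autotests into vm_count roughly equal chunks (toy policy)."""
        shards = [[] for _ in range(vm_count)]
        for i, test in enumerate(tests):
            shards[i % vm_count].append(test)
        return shards

    # Example: 100 autotests over 4 test VMs gives 4 chunks of 25 that run in parallel,
    # so the wall-clock test time is roughly that of the slowest chunk.
    shards = shard_tests([f"tst_{i}" for i in range(100)], 4)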

One thing you will notice being different is the comments in codereview. The 
systems create slightly different log output, but as the underlying compilers 
and test libraries are the same, compilation and autotest logs will remain the 
same. Links to logs, however, differ: previously we had incremental build 
numbers generated by Jenkins, whereas now we have SHA-1s created by Coin 
representing the content that was actually built.
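
To give a feel for the difference: a Jenkins build number is just a running 
counter, whereas a content-based id is derived from what actually went into the 
build. A toy example of such an id (this is not how Coin computes it, just the 
general idea):

    import hashlib

    def content_id(module, revision, prerequisite_ids, platform):
        """Toy content hash: the same inputs always give the same id, unlike a counter."""
        h = hashlib.sha1()
        for part in [module, revision, platform] + sorted(prerequisite_ids):
            h.update(part.encode("utf-8"))
        return h.hexdigest()

    # e.g. content_id("qtdeclarative", "<commit sha>", ["qtbase:<id>"], "ubuntu-14.04-gcc")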

Regarding device support, be it Android or iOS, or any embedded platform for 
that matter, the situation doesn't really change. We worked on getting those 
platforms into the Jenkins CI, but never got far enough with stabilizing the 
environment to use them for enforcing codereview commits. Those devices were 
only used internally for testing the state of modules periodically. The new 
Coin CI will eventually support them as well, and I have to say 'as well' 
because they might still be for internal verification only, mainly because it 
currently takes so much time to run tests on them. Since we support splitting 
the test set across several devices and testing in parallel, we might be able 
to include device testing already for commits being staged in codereview.

Puppet itself is a way of maintaining virtual machines once they are up and 
running, or of initially deploying software on a clean OS. The static machines 
we had were maintained via Puppet, but now that we re-clone VMs from a template 
machine, we don't have to update and maintain several machines simultaneously. 
It is enough to update the template (the master VM), and all subsequent clones 
will have the updated software.
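
The practical difference can be put in a few lines of pseudo-Python (the 
machine and template objects are hypothetical wrappers, not real APIs we use):

    # Old model: every long-lived VM has to be kept up to date individually
    # (this is the kind of work Puppet did for us).
    def update_static_fleet(machines, package):
        for machine in machines:
            machine.install(package)

    # New model: update the single template (master VM) once; every VM cloned
    # from it afterwards already contains the updated software.
    def update_template(template, package):
        template.install(package)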

To document what we actually have installed, we have been playing around with 
installer scripts that take care of provisioning a clean-slate OS with 
everything we need, but with mixed results. The scripts themselves take a lot 
of time to write, and they have to be maintained. But they are an asset if we 
want to install everything from scratch. If any 3rd party wants to duplicate 
our current OSes, they have to rely on the Qt Wiki pages for each branch 
listing the requirements to build Qt.
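
To give an idea of what such a provisioning script can look like, here is a 
tiny made-up fragment for a Linux VM; the package list is only an example, not 
our actual requirements list:

    import subprocess

    # Example packages only; the real per-platform lists are much longer.
    PACKAGES = ["build-essential", "libgl1-mesa-dev", "libxcb1-dev"]

    def provision_clean_ubuntu():
        """Install build requirements for Qt on a freshly installed OS."""
        subprocess.run(["apt-get", "update"], check=True)
        subprocess.run(["apt-get", "install", "-y"] + PACKAGES, check=True)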

With Regards,
-Tony

CI Tech Lead

The Qt Company / Digia Finland Ltd, Elektroniikkatie 13, 90590 Oulu, Finland
Email: tony.saraja...@theqtcompany.com
http://qt.io
Qt Blog: http://blog.qt.digia.com/
Qt Facebook: www.facebook.com/qt
Qt Twitter: @QtbyDigia, @Qtproject