Thanks for the info Jens, we have a better picture now 🙂 So Gradle is in charge of spinning up the containers, isn't it? We see that the command used to execute the distributed tests is the following:
gradlew gradlewStrict && \
  sed -e 's/JAVA_HOME/GRADLE_JVM/g' -i.bak gradlewStrict && \
  GRADLE_JVM=/usr/lib/jvm/java-8-openjdk-amd64 ./gradlewStrict \
    -PcompileJVM=/usr/lib/jvm/java-8-openjdk-amd64 -PcompileJVMVer=8 \
    -PtestJVM=/usr/lib/jvm/java-11-openjdk-amd64 -PtestJVMVer=11 \
    -PparallelDunit -PdunitDockerUser=geode \
    -PdunitDockerImage=$(docker images --format '{{.Repository}}:{{.Tag}}') \
    -PdunitParallelForks=4 --parallel --console=plain --no-daemon \
    -x javadoc -x spotlessCheck -x rat distributedTest

As we were not using any Docker image, our tests were running locally, and this is why we were getting errors about "ports in use". Is that right? Since we did not get any error about the missing image, I suppose it is OK for Gradle not to have a Docker image when running the tests.

Where is this container management handled in the code? How could we limit the number of containers? With the "-PdunitParallelForks" parameter? We don't have as many resources as the community to run our CI, so we have to adapt the execution.

Regarding the Docker image, is it possible to have permissions to download the image from http://gcr.io/apachegeode-ci/apache-develop-test-container ? I suppose the Docker Hub image is not being used, as it was last updated two years ago while the Dockerfile was modified three months ago.

Thanks again!

________________________________
From: Jens Deppe <jde...@pivotal.io>
Sent: Wednesday, July 17, 2019 17:41
To: dev@geode.apache.org
Subject: Re: Requirements for running distributed tests

Hi Alberto,

The images used to run tests on in CI are built here:
https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-images
(you may see these referred to as 'heavy lifters'). The packer scripts for these can be found here:
https://github.com/apache/geode/tree/develop/ci/images

The build model is not pure Concourse: for every test category (e.g. distributedTest) we programmatically launch one of these 'heavy lifters', copy the code over, and then run the tests.
As you've noted, distributed tests are run inside Docker containers. This provides isolation so that there are no port/filesystem/network conflicts when running tests in parallel. The container used for distributed tests is built from this Dockerfile:
https://github.com/apache/geode/tree/develop/ci/images/test-container

Windows tests are not run in parallel, as we didn't have success in getting the parallel dockerization to work consistently; so we only have a subset of tests which run in 'normal', serial mode.

Hope this helps.

--Jens

On Wed, Jul 17, 2019 at 3:47 AM Alberto Bustamante Reyes <alberto.bustamante.re...@est.tech> wrote:

> Hi Geode community,
>
> We are trying to set up a CI loop in our Geode fork. We have started with
> the tests that are run for every pull request, but we are having problems
> replicating what is done in the Apache Geode repository, so any help will
> be appreciated.
>
> We are not using Concourse, but we are trying to run the same commands that
> are run in the end.
>
> In the case of distributedTests, if we run them we get failures saying
> that there are ports in use, which do not appear if we run the tests
> independently (I mean running, for example, "geode-core:distributedTest"
> and "geode-cq:distributedTest" instead of just "distributedTest"). So we
> think we are missing something regarding the configuration of the VM where
> the tests are executed. We have seen there is a custom image used in Google
> Cloud ( --image-family="${IMAGE_FAMILY_PREFIX}${WINDOWS_PREFIX}geode-builder" );
> is it documented somewhere what the requirements or configuration of that
> image are?
>
> We have seen in the CI configuration (
> https://github.com/apache/geode/blob/develop/ci/pipelines/shared/jinja.variables.yml )
> that the requirements for distributedTests are 96 CPUs & 180 GB RAM.
> We can use only 24 CPUs and 128 GB RAM, but we have seen the tests are
> executed using the "dunitParallelForks" parameter to control how many
> Docker containers are run in parallel, so we suppose we should modify this
> parameter.
>
> Where can we check how these containers are created and controlled?
>
> Thanks in advance!
>
> Alberto B.
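For anyone adapting the run to a smaller machine, here is a minimal sketch of deriving a -PdunitParallelForks value from the host's CPU count. The CPUs-per-container ratio is an assumption for illustration, not something stated in the thread or in Geode's CI configuration:

```shell
#!/bin/sh
# Sketch: derive a dunitParallelForks value from the available CPU count.
# Assumption: each dunit container gets roughly 6 CPUs; tune cpus_per_fork
# for your own hardware and test workload.
forks_for_cpus() {
    cpus=$1
    cpus_per_fork=6
    forks=$((cpus / cpus_per_fork))
    # Always run at least one fork, even on very small machines.
    if [ "$forks" -lt 1 ]; then forks=1; fi
    echo "$forks"
}

# For the 24-CPU machine mentioned above:
echo "-PdunitParallelForks=$(forks_for_cpus 24)"
# prints: -PdunitParallelForks=4
```

The computed value could then be passed to the gradlewStrict invocation in place of the hard-coded -PdunitParallelForks=4.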