This is an automated email from the ASF dual-hosted git repository.

pdallig pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/zeppelin.git
The following commit(s) were added to refs/heads/master by this push:
     new 91f6cdf  [ZEPPELIN-5504] Update Dockerfile
91f6cdf is described below

commit 91f6cdfb1974966c4e25a773be2a999b4d6dea77
Author: latch-lio <88391418+latch-...@users.noreply.github.com>
AuthorDate: Tue Aug 31 13:54:03 2021 +0200

    [ZEPPELIN-5504] Update Dockerfile

    ### What is this PR for?
    Updating the Spark version, as the original version is no longer available at https://downloads.apache.org/spark/.

    ### What type of PR is it?
    Bug Fix

    ### Todos
    * [ ] - Task

    ### What is the Jira issue?
    [ZEPPELIN-5504](https://issues.apache.org/jira/browse/ZEPPELIN-5504)

    ### How should this be tested?
    * Strongly recommended: add automated unit tests for any new or changed behavior
    * Outline any manual steps to test the PR here.

    ### Screenshots (if appropriate)

    ### Questions:
    * Do the license files need updating?
    * Are there breaking changes for older versions?
    * Does this need documentation?

    Author: latch-lio <88391418+latch-...@users.noreply.github.com>

    Closes #4211 from latch-lio/patch-1 and squashes the following commits:

    b97cd4091 [latch-lio] Update Dockerfile
    4b8f3d85e [latch-lio] Update Dockerfile
---
 scripts/docker/spark-cluster-managers/spark_standalone/Dockerfile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/docker/spark-cluster-managers/spark_standalone/Dockerfile b/scripts/docker/spark-cluster-managers/spark_standalone/Dockerfile
index e4fb780..6b3646e 100644
--- a/scripts/docker/spark-cluster-managers/spark_standalone/Dockerfile
+++ b/scripts/docker/spark-cluster-managers/spark_standalone/Dockerfile
@@ -15,7 +15,7 @@ FROM centos:centos7
 
 ENV SPARK_PROFILE 2.4
-ENV SPARK_VERSION 2.4.0
+ENV SPARK_VERSION 2.4.8
 ENV HADOOP_PROFILE 2.7
 ENV SPARK_HOME /usr/local/spark
 
@@ -39,7 +39,7 @@ ENV JAVA_HOME /usr/lib/jvm/java
 ENV PATH $PATH:$JAVA_HOME/bin
 
 # install spark
-RUN curl -s http://www.apache.org/dist/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop$HADOOP_PROFILE.tgz | tar -xz -C /usr/local/
+RUN curl -s https://downloads.apache.org/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop$HADOOP_PROFILE.tgz | tar -xz -C /usr/local/
 RUN cd /usr/local && ln -s spark-$SPARK_VERSION-bin-hadoop$HADOOP_PROFILE spark
 
 # update boot script
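A possible manual check for this change (not part of the commit; sketched using the version, URL, and path taken from the diff above) is to confirm that the new download URL resolves and then rebuild the standalone Spark image. The image tag used below is only an illustrative name, not one defined by the repository.

    # Hypothetical pre-build check: a HEAD request against the new archive URL
    # should report HTTP 200 before the image is rebuilt.
    curl -sI "https://downloads.apache.org/spark/spark-2.4.8/spark-2.4.8-bin-hadoop2.7.tgz" | head -n 1

    # Rebuild the image from the updated Dockerfile
    # (the tag "spark_standalone" is an assumed, illustrative name).
    docker build -t spark_standalone scripts/docker/spark-cluster-managers/spark_standalone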