This is an automated email from the ASF dual-hosted git repository.

jongyoul pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/zeppelin.git


The following commit(s) were added to refs/heads/master by this push:
     new 9070d9f3a0 [ZEPPELIN-5777] Fix dead links in docs (#4418)
9070d9f3a0 is described below

commit 9070d9f3a0c20b7ec3569c3b30890544b42ede25
Author: dntjr8096 <lkjs8...@naver.com>
AuthorDate: Fri Jul 22 18:02:24 2022 +0900

    [ZEPPELIN-5777] Fix dead links in docs (#4418)
---
 docs/development/contribution/how_to_contribute_code.md | 4 ++--
 docs/development/writing_zeppelin_interpreter.md        | 2 +-
 docs/interpreter/bigquery.md                            | 2 +-
 docs/interpreter/geode.md                               | 8 ++++----
 docs/interpreter/submarine.md                           | 8 ++++----
 docs/quickstart/spark_with_zeppelin.md                  | 2 +-
 docs/setup/deployment/cdh.md                            | 2 +-
 docs/setup/deployment/virtual_machine.md                | 4 ++--
 docs/usage/other_features/cron_scheduler.md             | 2 +-
 9 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/docs/development/contribution/how_to_contribute_code.md b/docs/development/contribution/how_to_contribute_code.md
index 719317775c..32543c0797 100644
--- a/docs/development/contribution/how_to_contribute_code.md
+++ b/docs/development/contribution/how_to_contribute_code.md
@@ -137,14 +137,14 @@ cd <zeppelin_home>/zeppelin-interpreter/src/main/thrift
 
 ### Run Selenium test
 
-Zeppelin has [set of integration tests](https://github.com/apache/zeppelin/tree/master/zeppelin-server/src/test/java/org/apache/zeppelin/integration) using Selenium. To run these test, first build and run Zeppelin and make sure Zeppelin is running on port 8080. Then you can run test using following command
+Zeppelin has [set of integration tests](https://github.com/apache/zeppelin/tree/master/zeppelin-integration/src/test/java/org/apache/zeppelin/integration) using Selenium. To run these test, first build and run Zeppelin and make sure Zeppelin is running on port 8080. Then you can run test using following command
 
 ```bash
 TEST_SELENIUM=true ./mvnw test -Dtest=[TEST_NAME] -DfailIfNoTests=false \
 -pl 'zeppelin-interpreter,zeppelin-zengine,zeppelin-server'
 ```
 
-For example, to run [ParagraphActionIT](https://github.com/apache/zeppelin/blob/master/zeppelin-server/src/test/java/org/apache/zeppelin/integration/ParagraphActionsIT.java),
+For example, to run [ParagraphActionIT](https://github.com/apache/zeppelin/blob/master/zeppelin-integration/src/test/java/org/apache/zeppelin/integration/ParagraphActionsIT.java),
 
 ```bash
 TEST_SELENIUM=true ./mvnw test -Dtest=ParagraphActionsIT -DfailIfNoTests=false \
diff --git a/docs/development/writing_zeppelin_interpreter.md b/docs/development/writing_zeppelin_interpreter.md
index 33ecee1631..fa4970a293 100644
--- a/docs/development/writing_zeppelin_interpreter.md
+++ b/docs/development/writing_zeppelin_interpreter.md
@@ -236,7 +236,7 @@ To configure your interpreter you need to follow these steps:
 2. In the interpreter page, click the `+Create` button and configure your interpreter properties.
 Now you are done and ready to use your interpreter.
 
-> **Note :** Interpreters released with zeppelin have a [default configuration](https://github.com/apache/zeppelin/blob/master/zeppelin-zengine/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java#L397) which is used when there is no `conf/zeppelin-site.xml`.
+> **Note :** Interpreters released with zeppelin have a [default configuration](https://github.com/apache/zeppelin/blob/master/zeppelin-interpreter/src/main/java/org/apache/zeppelin/conf/ZeppelinConfiguration.java#L928) which is used when there is no `conf/zeppelin-site.xml`.
 
 ## Use your interpreter
 
diff --git a/docs/interpreter/bigquery.md b/docs/interpreter/bigquery.md
index cdac762f6d..1a585437f8 100644
--- a/docs/interpreter/bigquery.md
+++ b/docs/interpreter/bigquery.md
@@ -68,7 +68,7 @@ In a notebook, to enable the **BigQuery** interpreter, click the **Gear** icon a
 Within Google Cloud Platform (e.g. Google App Engine, Google Compute Engine),
 built-in credentials are used by default.
 
-Outside of GCP, follow the Google API authentication instructions for [Zeppelin Google Cloud Storage](https://zeppelin.apache.org/docs/latest/storage/storage.html#notebook-storage-in-gcs)
+Outside of GCP, follow the Google API authentication instructions for [Zeppelin Google Cloud Storage](https://zeppelin.apache.org/docs/latest/setup/storage/storage.html#notebook-storage-in-google-cloud-storage)
 
 ## Using the BigQuery Interpreter
 
diff --git a/docs/interpreter/geode.md b/docs/interpreter/geode.md
index 436c308c5c..bd6bb7bbf9 100644
--- a/docs/interpreter/geode.md
+++ b/docs/interpreter/geode.md
@@ -37,7 +37,7 @@ limitations under the License.
   </tr>
 </table>
 
-This interpreter supports the [Geode](http://geode.incubator.apache.org/) [Object Query Language (OQL)](http://geode-docs.cfapps.io/docs/developing/querying_basics/oql_compared_to_sql.html). 
+This interpreter supports the [Geode](http://geode.incubator.apache.org/) [Object Query Language (OQL)](https://geode.incubator.apache.org/docs/guide/latest/developing/querying_basics/query_basics.html). 
 With the OQL-based querying language:
 
 [<img align="right" src="http://img.youtube.com/vi/zvzzA9GXu3Q/3.jpg" alt="zeppelin-view" hspace="10" width="200"></img>](https://www.youtube.com/watch?v=zvzzA9GXu3Q)
@@ -104,7 +104,7 @@ The Geode interpreter expresses the following properties:
 ### Create / Destroy Regions
 
 The OQL specification does not support  [Geode Regions](https://cwiki.apache.org/confluence/display/GEODE/Index#Index-MainConceptsandComponents) mutation operations. 
-To `create`/`destroy` regions one should use the [GFSH](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/chapter_overview.html) shell tool instead. 
+To `create`/`destroy` regions one should use the [GFSH](https://geode.incubator.apache.org/docs/guide/latest/tools_modules/gfsh/chapter_overview.html) shell tool instead. 
 In the following it is assumed that the GFSH is colocated with Zeppelin server.
 
 ```bash
@@ -126,7 +126,7 @@ EOF
 Above snippet re-creates two regions: `regionEmployee` and `regionCompany`. 
 Note that you have to explicitly specify the locator host and port. 
 The values should match those you have used in the Geode Interpreter configuration. 
-Comprehensive list of [GFSH Commands by Functional Area](http://geode-docs.cfapps.io/docs/tools_modules/gfsh/gfsh_quick_reference.html).
+Comprehensive list of [GFSH Commands by Functional Area](https://geode.incubator.apache.org/docs/guide/latest/tools_modules/gfsh/gfsh_quick_reference.html).
 
 ### Basic OQL
 ```sql
@@ -188,7 +188,7 @@ SELECT * FROM /regionEmployee e WHERE e.employeeId > ${Id}
 The Geode Interpreter provides a basic auto-completion functionality. On `(Ctrl+.)` it list the most relevant suggestions in a pop-up window.
 
 ## Geode REST API
-To list the defined regions you can use the [Geode REST API](http://geode-docs.cfapps.io/docs/geode_rest/chapter_overview.html):
+To list the defined regions you can use the [Geode REST API](https://geode.apache.org/docs/guide/latest/rest_apps/chapter_overview.html):
 
 ```
 http://<geode server hostname>phd1.localdomain:8484/gemfire-api/v1/
diff --git a/docs/interpreter/submarine.md b/docs/interpreter/submarine.md
index cfad6255fc..3031155f6d 100644
--- a/docs/interpreter/submarine.md
+++ b/docs/interpreter/submarine.md
@@ -23,7 +23,7 @@ limitations under the License.
 
 <div id="toc"></div>
 
-[Hadoop Submarine ](https://hadoop.apache.org/submarine/) is the latest machine learning framework subproject in the Hadoop 3.1 release. It allows Hadoop to support Tensorflow, MXNet, Caffe, Spark, etc. A variety of deep learning frameworks provide a full-featured system framework for machine learning algorithm development, distributed model training, model management, and model publishing, combined with hadoop's intrinsic data storage and data processing capabilities to enable data scie [...]
+[Hadoop Submarine ](https://submarine.apache.org/) is the latest machine learning framework subproject in the Hadoop 3.1 release. It allows Hadoop to support Tensorflow, MXNet, Caffe, Spark, etc. A variety of deep learning frameworks provide a full-featured system framework for machine learning algorithm development, distributed model training, model management, and model publishing, combined with hadoop's intrinsic data storage and data processing capabilities to enable data scientists  [...]
 
 A deep learning algorithm project requires data acquisition, data processing, data cleaning, interactive visual programming adjustment parameters, algorithm testing, algorithm publishing, algorithm job scheduling, offline model training, model online services and many other processes and processes. Zeppelin is a web-based notebook that supports interactive data analysis. You can use SQL, Scala, Python, etc. to make data-driven, interactive, collaborative documents.
 
@@ -385,7 +385,7 @@ The docker images file is stored in the `zeppelin/scripts/docker/submarine` dire
 + **Submarine interpreter BUG**
  If you encounter a bug for this interpreter, please create a sub **JIRA** ticket on [ZEPPELIN-3856](https://issues.apache.org/jira/browse/ZEPPELIN-3856).
 + **Submarine Running problem**
-  If you encounter a problem for Submarine runtime, please create a **ISSUE** on [hadoop-submarine-ecosystem](https://github.com/hadoopsubmarine/hadoop-submarine-ecosystem).
+  If you encounter a problem for Submarine runtime, please create a **ISSUE** on [apache-hadoop-submarine](https://github.com/apache/submarine/issues).
 + **YARN Submarine BUG**
  If you encounter a bug for Yarn Submarine, please create a **JIRA** ticket on [SUBMARINE](https://issues.apache.org/jira/browse/SUBMARINE).
 
@@ -399,9 +399,9 @@ The docker images file is stored in the `zeppelin/scripts/docker/submarine` dire
   + You can use the hadoop version of the hadoop submarine team git repository.
 
 2. **Submarine runtime environment**
-  you can use Submarine-installer https://github.com/hadoopsubmarine, Deploy Docker and network environments.
+  you can use Submarine-installer https://github.com/apache/submarine, Deploy Docker and network environments.
 
 ## More
 
-**Hadoop Submarine Project**: https://hadoop.apache.org/submarine
+**Hadoop Submarine Project**: https://submarine.apache.org/
 **Youtube Submarine Channel**: https://www.youtube.com/channel/UC4JBt8Y8VJ0BW0IM9YpdCyQ
\ No newline at end of file
diff --git a/docs/quickstart/spark_with_zeppelin.md b/docs/quickstart/spark_with_zeppelin.md
index 7250f00d3a..7afa608e74 100644
--- a/docs/quickstart/spark_with_zeppelin.md
+++ b/docs/quickstart/spark_with_zeppelin.md
@@ -28,7 +28,7 @@ limitations under the License.
 For a brief overview of Apache Spark fundamentals with Apache Zeppelin, see the following guide:
 
 - **built-in** Apache Spark integration.
-- With [Spark Scala](https://spark.apache.org/docs/latest/quick-start.html) [SparkSQL](http://spark.apache.org/sql/), [PySpark](https://spark.apache.org/docs/latest/api/python/pyspark.html), [SparkR](https://spark.apache.org/docs/latest/sparkr.html)
+- With [Spark Scala](https://spark.apache.org/docs/latest/quick-start.html) [SparkSQL](http://spark.apache.org/sql/), [PySpark](https://spark.apache.org/docs/latest/api/python/), [SparkR](https://spark.apache.org/docs/latest/sparkr.html)
 - Inject [SparkContext](https://spark.apache.org/docs/latest/api/java/org/apache/spark/SparkContext.html), [SQLContext](https://spark.apache.org/docs/latest/sql-programming-guide.html) and [SparkSession](https://spark.apache.org/docs/latest/sql-programming-guide.html) automatically
 - Canceling job and displaying its progress 
 - Supports different modes: local, standalone, yarn(client & cluster), k8s
diff --git a/docs/setup/deployment/cdh.md b/docs/setup/deployment/cdh.md
index 20f819b4ee..485cd34935 100644
--- a/docs/setup/deployment/cdh.md
+++ b/docs/setup/deployment/cdh.md
@@ -25,7 +25,7 @@ limitations under the License.
 
 ### 1. Import Cloudera QuickStart Docker image
 
->[Cloudera](http://www.cloudera.com/) has officially provided CDH Docker Hub in their own container. Please check [this guide page](http://www.cloudera.com/documentation/enterprise/latest/topics/quickstart_docker_container.html#cloudera_docker_container) for more information.
+>[Cloudera](http://www.cloudera.com/) has officially provided CDH Docker Hub in their own container. Please check [this guide page](https://hub.docker.com/r/cloudera/quickstart/) for more information.
 
 You can import the Docker image by pulling it from Cloudera Docker Hub.
 
diff --git a/docs/setup/deployment/virtual_machine.md b/docs/setup/deployment/virtual_machine.md
index 9288dfbdfd..0578b9caa7 100644
--- a/docs/setup/deployment/virtual_machine.md
+++ b/docs/setup/deployment/virtual_machine.md
@@ -33,14 +33,14 @@ For SparkR users, this script includes several helpful [R Libraries](#r-extras).
 
 ### Prerequisites
 
-This script requires three applications, [Ansible](http://docs.ansible.com/ansible/intro_installation.html#latest-releases-via-pip "Ansible"), [Vagrant](http://www.vagrantup.com "Vagrant") and [Virtual Box](https://www.virtualbox.org/ "Virtual Box").  All of these applications are freely available as Open Source projects and extremely easy to set up on most operating systems.
+This script requires three applications, [Ansible](https://www.ansible.com/ "Ansible"), [Vagrant](http://www.vagrantup.com "Vagrant") and [Virtual Box](https://www.virtualbox.org/ "Virtual Box").  All of these applications are freely available as Open Source projects and extremely easy to set up on most operating systems.
 
 ## Create a Zeppelin Ready VM
 
 If you are running Windows and don't yet have python installed, [install Python 2.7.x](https://www.python.org/downloads/release/python-2710/) first.
 
 1. Download and Install Vagrant:  [Vagrant Downloads](http://www.vagrantup.com/downloads.html)
-2. Install Ansible:  [Ansible Python pip install](http://docs.ansible.com/ansible/intro_installation.html#latest-releases-via-pip)
+2. Install Ansible:  [Ansible Python pip install](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#pip-install)
 
     ```bash
     sudo easy_install pip
diff --git a/docs/usage/other_features/cron_scheduler.md b/docs/usage/other_features/cron_scheduler.md
index 05fabd7a3d..a505c0ace3 100644
--- a/docs/usage/other_features/cron_scheduler.md
+++ b/docs/usage/other_features/cron_scheduler.md
@@ -39,7 +39,7 @@ You can set a cron schedule easily by clicking each option such as `1m` and `5m`
 
 ### Cron expression
 
-You can set the cron schedule by filling in this form. Please see [Cron Trigger Tutorial](http://www.quartz-scheduler.org/documentation/quartz-2.2.x/tutorials/crontrigger) for the available cron syntax.
+You can set the cron schedule by filling in this form. Please see [Cron Trigger Tutorial](https://www.quartz-scheduler.org/documentation/quartz-2.2.2/tutorials/tutorial-lesson-06.html) for the available cron syntax.
 
 ### Cron executing user (It is removed from 0.8 where it enforces the cron execution user to be the note owner for security purpose)
 
