Repository: zeppelin
Updated Branches:
  refs/heads/master 53f8fa167 -> 3c5d1283b


ZEPPELIN-1985 Remove user from pig tutorial note

### What is this PR for?
The user should be removed from the pig tutorial note; otherwise the note cannot be seen in 
anonymous mode.
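
As a quick sanity check of the result (a sketch only, assuming `jq` is available; it is not part of this PR), the tutorial note should now reference only the `anonymous` user:

```sh
# List the distinct values of the per-paragraph "user" field in the tutorial
# note; after this change it should print only ["anonymous"].
jq '[.paragraphs[].user] | unique' notebook/2C57UKYWR/note.json
```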

### What type of PR is it?
[Bug Fix]

### Todos
* [ ] - Task

### What is the Jira issue?
https://issues.apache.org/jira/browse/ZEPPELIN-1985

### How should this be tested?
Tested it manually in anonymous mode.
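
A rough outline of that check (the property name and config path are assumptions about a stock Zeppelin install, not something this PR changes):

```sh
# Anonymous access is Zeppelin's default when no shiro.ini is configured; the
# zeppelin.anonymous.allowed property in zeppelin-site.xml is assumed to control it.
grep -A 1 'zeppelin.anonymous.allowed' conf/zeppelin-site.xml

# Then open the "Zeppelin Tutorial/Using Pig for querying data" note without
# logging in and confirm it is listed and its paragraphs render.
```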

### Screenshots (if appropriate)

![image](https://cloud.githubusercontent.com/assets/164491/22135013/5f34ad78-df06-11e6-9043-fffce363bb9e.png)

### Questions:
* Do the license files need to be updated? No
* Are there breaking changes for older versions? No
* Does this need documentation? No

Author: Jeff Zhang <zjf...@apache.org>

Closes #1915 from zjffdu/ZEPPELIN-1985 and squashes the following commits:

4432f89 [Jeff Zhang] add downloading bank.csv
c92b8f3 [Jeff Zhang] ZEPPELIN-1985. Remove user from pig tutorial note
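
The "add downloading bank.csv" commit adds a `%sh` paragraph to the note; the commands it runs (taken from the diff below) boil down to:

```sh
# Fetch the tutorial data set and copy it into the current user's HDFS home
# directory (mapreduce mode needs HADOOP_CONF_DIR exported in zeppelin-env.sh).
wget https://s3.amazonaws.com/apache-zeppelin/tutorial/bank/bank.csv
hadoop fs -put bank.csv .
```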


Project: http://git-wip-us.apache.org/repos/asf/zeppelin/repo
Commit: http://git-wip-us.apache.org/repos/asf/zeppelin/commit/3c5d1283
Tree: http://git-wip-us.apache.org/repos/asf/zeppelin/tree/3c5d1283
Diff: http://git-wip-us.apache.org/repos/asf/zeppelin/diff/3c5d1283

Branch: refs/heads/master
Commit: 3c5d1283b58fbb54057526bd85fd0b95c6a4da90
Parents: 53f8fa1
Author: Jeff Zhang <zjf...@apache.org>
Authored: Sun Jan 22 12:52:55 2017 +0800
Committer: Mina Lee <mina...@apache.org>
Committed: Sun Jan 22 14:59:58 2017 +0900

----------------------------------------------------------------------
 notebook/2C57UKYWR/note.json | 112 ++++++++++++++++++++++----------------
 1 file changed, 65 insertions(+), 47 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/zeppelin/blob/3c5d1283/notebook/2C57UKYWR/note.json
----------------------------------------------------------------------
diff --git a/notebook/2C57UKYWR/note.json b/notebook/2C57UKYWR/note.json
index 8292592..21d1231 100644
--- a/notebook/2C57UKYWR/note.json
+++ b/notebook/2C57UKYWR/note.json
@@ -2,8 +2,8 @@
   "paragraphs": [
     {
       "text": "%md\n\n\n### [Apache Pig](http://pig.apache.org/) is a platform 
for analyzing large data sets that consists of a high-level language for 
expressing data analysis programs, coupled with infrastructure for evaluating 
these programs. The salient property of Pig programs is that their structure is 
amenable to substantial parallelization, which in turns enables them to handle 
very large data sets.\n\nPig\u0027s language layer currently consists of a 
textual language called Pig Latin, which has the following key properties:\n\n* 
Ease of programming. It is trivial to achieve parallel execution of simple, 
\"embarrassingly parallel\" data analysis tasks. Complex tasks comprised of 
multiple interrelated data transformations are explicitly encoded as data flow 
sequences, making them easy to write, understand, and maintain.\n* Optimization 
opportunities. The way in which tasks are encoded permits the system to 
optimize their execution automatically, allowing the user to focus on 
 semantics rather than efficiency.\n* Extensibility. Users can create their own 
functions to do special-purpose processing.\n",
-      "user": "user1",
-      "dateUpdated": "Jan 6, 2017 3:55:03 PM",
+      "user": "anonymous",
+      "dateUpdated": "Jan 22, 2017 12:48:50 PM",
       "config": {
         "colWidth": 12.0,
         "enabled": true,
@@ -33,15 +33,15 @@
       "jobName": "paragraph_1483277502513_1156234051",
       "id": "20170101-213142_1565013608",
       "dateCreated": "Jan 1, 2017 9:31:42 PM",
-      "dateStarted": "Jan 6, 2017 3:55:03 PM",
-      "dateFinished": "Jan 6, 2017 3:55:04 PM",
+      "dateStarted": "Jan 22, 2017 12:48:50 PM",
+      "dateFinished": "Jan 22, 2017 12:48:51 PM",
       "status": "FINISHED",
       "progressUpdateIntervalMs": 500
     },
     {
       "text": "%md\n\nThis pig tutorial use pig to do the same thing as spark 
tutorial. The default mode is mapreduce, you can also use other modes like 
local/tez_local/tez. For mapreduce mode, you need to have hadoop installed and 
export `HADOOP_CONF_DIR` in `zeppelin-env.sh`\n\nThe tutorial consists of 3 
steps.\n\n* Use shell interpreter to download bank.csv and upload it to hdfs\n* 
use `%pig` to process the data\n* use `%pig.query` to query the data",
-      "user": "user1",
-      "dateUpdated": "Jan 6, 2017 3:55:18 PM",
+      "user": "anonymous",
+      "dateUpdated": "Jan 22, 2017 12:48:55 PM",
       "config": {
         "colWidth": 12.0,
         "enabled": true,
@@ -71,15 +71,51 @@
       "jobName": "paragraph_1483689316217_-629483391",
       "id": "20170106-155516_1050601059",
       "dateCreated": "Jan 6, 2017 3:55:16 PM",
-      "dateStarted": "Jan 6, 2017 3:55:18 PM",
-      "dateFinished": "Jan 6, 2017 3:55:18 PM",
+      "dateStarted": "Jan 22, 2017 12:48:55 PM",
+      "dateFinished": "Jan 22, 2017 12:48:55 PM",
+      "status": "FINISHED",
+      "progressUpdateIntervalMs": 500
+    },
+    {
+      "text": "%sh\n\nwget 
https://s3.amazonaws.com/apache-zeppelin/tutorial/bank/bank.csv\nhadoop fs -put 
bank.csv .\n",
+      "user": "anonymous",
+      "dateUpdated": "Jan 22, 2017 12:51:48 PM",
+      "config": {
+        "colWidth": 12.0,
+        "enabled": true,
+        "results": {},
+        "editorSetting": {
+          "language": "text",
+          "editOnDblClick": false
+        },
+        "editorMode": "ace/mode/text"
+      },
+      "settings": {
+        "params": {},
+        "forms": {}
+      },
+      "results": {
+        "code": "SUCCESS",
+        "msg": [
+          {
+            "type": "TEXT",
+            "data": "--2017-01-22 12:51:48--  
https://s3.amazonaws.com/apache-zeppelin/tutorial/bank/bank.csv\nResolving 
s3.amazonaws.com... 52.216.80.227\nConnecting to 
s3.amazonaws.com|52.216.80.227|:443... connected.\nHTTP request sent, awaiting 
response... 200 OK\nLength: 461474 (451K) [application/octet-stream]\nSaving 
to: \u0027bank.csv.3\u0027\n\n     0K .......... .......... .......... 
.......... .......... 11%  141K 3s\n    50K .......... .......... .......... 
.......... .......... 22%  243K 2s\n   100K .......... .......... .......... 
.......... .......... 33%  449K 1s\n   150K .......... .......... .......... 
.......... .......... 44%  413K 1s\n   200K .......... .......... .......... 
.......... .......... 55%  746K 1s\n   250K .......... .......... .......... 
.......... .......... 66%  588K 0s\n   300K .......... .......... .......... 
.......... .......... 77%  840K 0s\n   350K .......... .......... .......... 
.......... .......... 88%  795K 0s\n   400K .......... ......
 .... .......... .......... .......... 99% 1.35M 0s\n   450K                    
                                   100% 13.2K\u003d1.1s\n\n2017-01-22 12:51:50 
(409 KB/s) - \u0027bank.csv.3\u0027 saved [461474/461474]\n\n17/01/22 12:51:51 
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your 
platform... using builtin-java classes where applicable\n"
+          }
+        ]
+      },
+      "apps": [],
+      "jobName": "paragraph_1485058437578_-1906301827",
+      "id": "20170122-121357_640055590",
+      "dateCreated": "Jan 22, 2017 12:13:57 PM",
+      "dateStarted": "Jan 22, 2017 12:51:48 PM",
+      "dateFinished": "Jan 22, 2017 12:51:52 PM",
       "status": "FINISHED",
       "progressUpdateIntervalMs": 500
     },
     {
       "text": "%pig\n\nbankText \u003d load \u0027bank.csv\u0027 using 
PigStorage(\u0027;\u0027);\nbank \u003d foreach bankText generate $0 as age, $1 
as job, $2 as marital, $3 as education, $5 as balance; \nbank \u003d filter 
bank by age !\u003d \u0027\"age\"\u0027;\nbank \u003d foreach bank generate 
(int)age, REPLACE(job,\u0027\"\u0027,\u0027\u0027) as job, REPLACE(marital, 
\u0027\"\u0027, \u0027\u0027) as marital, (int)(REPLACE(balance, 
\u0027\"\u0027, \u0027\u0027)) as balance;\n\n-- The following statement is 
optional, it depends on whether your needs.\n-- store bank into 
\u0027clean_bank.csv\u0027 using PigStorage(\u0027;\u0027);\n\n\n",
-      "user": "user1",
-      "dateUpdated": "Jan 6, 2017 3:57:11 PM",
+      "user": "anonymous",
+      "dateUpdated": "Jan 22, 2017 12:49:11 PM",
       "config": {
         "colWidth": 12.0,
         "editorMode": "ace/mode/pig",
@@ -102,15 +138,15 @@
       "jobName": "paragraph_1483277250237_-466604517",
       "id": "20161228-140640_1560978333",
       "dateCreated": "Jan 1, 2017 9:27:30 PM",
-      "dateStarted": "Jan 6, 2017 3:57:11 PM",
-      "dateFinished": "Jan 6, 2017 3:57:13 PM",
+      "dateStarted": "Jan 22, 2017 12:49:11 PM",
+      "dateFinished": "Jan 22, 2017 12:49:13 PM",
       "status": "FINISHED",
       "progressUpdateIntervalMs": 500
     },
     {
       "text": "%pig.query\n\nbank_data \u003d filter bank by age \u003c 30;\nb 
\u003d group bank_data by age;\nforeach b generate group, COUNT($1);\n\n",
-      "user": "user1",
-      "dateUpdated": "Jan 6, 2017 3:57:15 PM",
+      "user": "anonymous",
+      "dateUpdated": "Jan 22, 2017 12:49:16 PM",
       "config": {
         "colWidth": 4.0,
         "editorMode": "ace/mode/pig",
@@ -139,7 +175,7 @@
         "msg": [
           {
             "type": "TABLE",
-            "data": 
"group\tnull\n19\t4\n20\t3\n21\t7\n22\t9\n23\t20\n24\t24\n25\t44\n26\t77\n27\t94\n28\t103\n29\t97\n"
+            "data": 
"group\tcol_1\n19\t4\n20\t3\n21\t7\n22\t9\n23\t20\n24\t24\n25\t44\n26\t77\n27\t94\n28\t103\n29\t97\n"
           }
         ]
       },
@@ -147,15 +183,15 @@
       "jobName": "paragraph_1483277250238_-465450270",
       "id": "20161228-140730_1903342877",
       "dateCreated": "Jan 1, 2017 9:27:30 PM",
-      "dateStarted": "Jan 6, 2017 3:57:15 PM",
-      "dateFinished": "Jan 6, 2017 3:57:16 PM",
+      "dateStarted": "Jan 22, 2017 12:49:16 PM",
+      "dateFinished": "Jan 22, 2017 12:49:30 PM",
       "status": "FINISHED",
       "progressUpdateIntervalMs": 500
     },
     {
       "text": "%pig.query\n\nbank_data \u003d filter bank by age \u003c 
${maxAge\u003d40};\nb \u003d group bank_data by age;\nforeach b generate group, 
COUNT($1);",
-      "user": "user1",
-      "dateUpdated": "Jan 6, 2017 3:57:18 PM",
+      "user": "anonymous",
+      "dateUpdated": "Jan 22, 2017 12:49:18 PM",
       "config": {
         "colWidth": 4.0,
         "editorMode": "ace/mode/pig",
@@ -192,7 +228,7 @@
         "msg": [
           {
             "type": "TABLE",
-            "data": 
"group\tnull\n19\t4\n20\t3\n21\t7\n22\t9\n23\t20\n24\t24\n25\t44\n26\t77\n27\t94\n28\t103\n29\t97\n30\t150\n31\t199\n32\t224\n33\t186\n34\t231\n35\t180\n"
+            "data": 
"group\tcol_1\n19\t4\n20\t3\n21\t7\n22\t9\n23\t20\n24\t24\n25\t44\n26\t77\n27\t94\n28\t103\n29\t97\n30\t150\n31\t199\n32\t224\n33\t186\n34\t231\n35\t180\n"
           }
         ]
       },
@@ -200,15 +236,15 @@
       "jobName": "paragraph_1483277250239_-465835019",
       "id": "20161228-154918_1551591203",
       "dateCreated": "Jan 1, 2017 9:27:30 PM",
-      "dateStarted": "Jan 6, 2017 3:57:18 PM",
-      "dateFinished": "Jan 6, 2017 3:57:19 PM",
+      "dateStarted": "Jan 22, 2017 12:49:18 PM",
+      "dateFinished": "Jan 22, 2017 12:49:32 PM",
       "status": "FINISHED",
       "progressUpdateIntervalMs": 500
     },
     {
       "text": "%pig.query\n\nbank_data \u003d filter bank by 
marital\u003d\u003d\u0027${marital\u003dsingle,single|divorced|married}\u0027;\nb
 \u003d group bank_data by age;\nforeach b generate group, COUNT($1) as 
c;\n\n\n",
-      "user": "user1",
-      "dateUpdated": "Jan 6, 2017 3:57:24 PM",
+      "user": "anonymous",
+      "dateUpdated": "Jan 22, 2017 12:49:20 PM",
       "config": {
         "colWidth": 4.0,
         "editorMode": "ace/mode/pig",
@@ -264,8 +300,8 @@
       "jobName": "paragraph_1483277250240_-480070728",
       "id": "20161228-142259_575675591",
       "dateCreated": "Jan 1, 2017 9:27:30 PM",
-      "dateStarted": "Jan 6, 2017 3:57:20 PM",
-      "dateFinished": "Jan 6, 2017 3:57:20 PM",
+      "dateStarted": "Jan 22, 2017 12:49:30 PM",
+      "dateFinished": "Jan 22, 2017 12:49:34 PM",
       "status": "FINISHED",
       "progressUpdateIntervalMs": 500
     },
@@ -289,28 +325,10 @@
   "name": "Zeppelin Tutorial/Using Pig for querying data",
   "id": "2C57UKYWR",
   "angularObjects": {
-    "2C3DR183X:shared_process": [],
-    "2C5VH924X:shared_process": [],
-    "2C686X8ZH:shared_process": [],
-    "2C66Z9XPQ:shared_process": [],
-    "2C3JKFMJU:shared_process": [],
-    "2C69WE69N:shared_process": [],
     "2C3RWCVAG:shared_process": [],
-    "2C4HKDCQW:shared_process": [],
-    "2C4BJDRRZ:shared_process": [],
-    "2C6V3D44K:shared_process": [],
-    "2C3VECEG2:shared_process": [],
-    "2C5SRRXHM:shared_process": [],
-    "2C5DCRVGM:shared_process": [],
-    "2C66GE1VB:shared_process": [],
-    "2C3PTPMUH:shared_process": [],
-    "2C48Y7FSJ:shared_process": [],
-    "2C4ZD49PF:shared_process": [],
-    "2C63XW4XE:shared_process": [],
-    "2C4UB1UZA:shared_process": [],
-    "2C5S1R21W:shared_process": [],
-    "2C3SQSB7V:shared_process": []
+    "2C9KGCHDE:shared_process": [],
+    "2C8X2BS16:shared_process": []
   },
   "config": {},
   "info": {}
-}
+}
\ No newline at end of file
