Repository: accumulo-wikisearch
Updated Branches:
  refs/heads/master 5ab605107 -> 074fa7729


Updates to INSTALL.md with 1.8.0 upgrade


Project: http://git-wip-us.apache.org/repos/asf/accumulo-wikisearch/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo-wikisearch/commit/074fa772
Tree: http://git-wip-us.apache.org/repos/asf/accumulo-wikisearch/tree/074fa772
Diff: http://git-wip-us.apache.org/repos/asf/accumulo-wikisearch/diff/074fa772

Branch: refs/heads/master
Commit: 074fa7729ee837932dc1a665c98419e9a33fe622
Parents: 5ab6051
Author: Mike Miller <mmil...@apache.org>
Authored: Tue Dec 13 14:25:52 2016 -0500
Committer: Mike Miller <mmil...@apache.org>
Committed: Tue Dec 13 14:25:52 2016 -0500

----------------------------------------------------------------------
 INSTALL.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo-wikisearch/blob/074fa772/INSTALL.md
----------------------------------------------------------------------
diff --git a/INSTALL.md b/INSTALL.md
index fff2bc0..9a85105 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -27,7 +27,8 @@ Instructions for installing and running the Accumulo Wikisearch example.
         You will want to grab the files with the link name of pages-articles.xml.bz2. Though not strictly
         required, the ingest will go more quickly if the files are decompressed:
 
-        $ bunzip2 < enwiki-*-pages-articles.xml.bz2 | hadoop fs -put - /wikipedia/enwiki-pages-articles.xml
+        $ bunzip2 enwiki-*-pages-articles.xml.bz2
+        $ hadoop fs -put enwiki-*-pages-articles.xml /wikipedia/enwiki-pages-articles.xml
 
 ### Instructions
        
@@ -39,7 +40,7 @@ Instructions for installing and running the Accumulo Wikisearch example.
         $ cp wikipedia.xml.example wikipedia.xml
         $ vim wikipedia.xml
  
-1. Copy `ingest/lib/wikisearch-*.jar` and `ingest/lib/protobuf*.jar` to `$ACCUMULO_HOME/lib/ext`
+1. Copy `ingest/lib/wikisearch-*.jar` to `$ACCUMULO_HOME/lib/ext`
 1. Run `ingest/bin/ingest.sh` (or `ingest_parallel.sh` if running parallel version) with one
    argument (the name of the directory in HDFS where the wikipedia XML files reside) and this will
    kick off a MapReduce job to ingest the data into Accumulo.
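
Taken together, the updated steps from this diff amount to roughly the following sequence. This is a sketch, not part of the commit: the `/wikipedia` argument to ingest.sh is assumed from the `hadoop fs -put` target above, and the filenames follow the `enwiki-*` pattern already used in INSTALL.md.

        # Decompress the dump locally, then copy the XML into HDFS
        $ bunzip2 enwiki-*-pages-articles.xml.bz2
        $ hadoop fs -put enwiki-*-pages-articles.xml /wikipedia/enwiki-pages-articles.xml

        # Copy the wikisearch ingest jar onto Accumulo's classpath
        # (the protobuf jar is no longer copied as of this change)
        $ cp ingest/lib/wikisearch-*.jar $ACCUMULO_HOME/lib/ext

        # Run the ingest with the HDFS directory holding the wikipedia XML;
        # this kicks off the MapReduce job that loads the data into Accumulo
        $ ingest/bin/ingest.sh /wikipedia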
