Repository: accumulo Updated Branches: refs/heads/gh-pages 97bc584b1 -> 535d261ed
http://git-wip-us.apache.org/repos/asf/accumulo/blob/535d261e/1.8/examples/_site/sample.html
----------------------------------------------------------------------
diff --git a/1.8/examples/_site/sample.html b/1.8/examples/_site/sample.html
deleted file mode 100644
index 7880d65..0000000
--- a/1.8/examples/_site/sample.html
+++ /dev/null
@@ -1,198 +0,0 @@
-<h2 id="basic-sampling-example">Basic Sampling Example</h2>
-
-<p>Accumulo supports building a set of sample data that can be efficiently
-accessed by scanners. What data is included in the sample set is configurable.
-Below, some data representing documents is inserted.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance sampex> createtable sampex
-root@instance sampex> insert 9255 doc content 'abcde'
-root@instance sampex> insert 9255 doc url file://foo.txt
-root@instance sampex> insert 8934 doc content 'accumulo scales'
-root@instance sampex> insert 8934 doc url file://accumulo_notes.txt
-root@instance sampex> insert 2317 doc content 'milk, eggs, bread, parmigiano-reggiano'
-root@instance sampex> insert 2317 doc url file://groceries/9.txt
-root@instance sampex> insert 3900 doc content 'EC2 ate my homework'
-root@instance sampex> insert 3900 doc url file://final_project.txt
-</code></pre>
-</div>
-
-<p>Below, the table sampex is configured to build a sample set. The configuration
-causes Accumulo to include any row where <code class="highlighter-rouge">murmur3_32(row) % 3 == 0</code> in the
-table's sample data.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance sampex> config -t sampex -s table.sampler.opt.hasher=murmur3_32
-root@instance sampex> config -t sampex -s table.sampler.opt.modulus=3
-root@instance sampex> config -t sampex -s table.sampler=org.apache.accumulo.core.client.sample.RowSampler
-</code></pre>
-</div>
-
-<p>Below, attempting to scan the sample returns an error. This is because data
-was inserted before the sample set was configured.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance sampex> scan --sample
-2015-09-09 12:21:50,643 [shell.Shell] ERROR: org.apache.accumulo.core.client.SampleNotPresentException: Table sampex(ID:2) does not have sampling configured or built
-</code></pre>
-</div>
-
-<p>To remedy this problem, the following command will flush in-memory data and
-compact any files that do not contain the correct sample data.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance sampex> compact -t sampex --sf-no-sample
-</code></pre>
-</div>
-
-<p>After the compaction, the sample scan works.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance sampex> scan --sample
-2317 doc:content [] milk, eggs, bread, parmigiano-reggiano
-2317 doc:url [] file://groceries/9.txt
-</code></pre>
-</div>
-
-<p>The commands below show that updates to data in the sample are seen when
-scanning the sample.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance sampex> insert 2317 doc content 'milk, eggs, bread, parmigiano-reggiano, butter'
-root@instance sampex> scan --sample
-2317 doc:content [] milk, eggs, bread, parmigiano-reggiano, butter
-2317 doc:url [] file://groceries/9.txt
-</code></pre>
-</div>
-
-<p>In order to make scanning the sample fast, sample data is partitioned as data is
-written to Accumulo. This means that if the sample configuration is changed, data
-written previously was partitioned using different criteria.
-Accumulo will detect this situation and fail sample scans. The commands below
-show this failure and fixing the problem with a compaction.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance sampex> config -t sampex -s table.sampler.opt.modulus=2
-root@instance sampex> scan --sample
-2015-09-09 12:22:51,058 [shell.Shell] ERROR: org.apache.accumulo.core.client.SampleNotPresentException: Table sampex(ID:2) does not have sampling configured or built
-root@instance sampex> compact -t sampex --sf-no-sample
-2015-09-09 12:23:07,242 [shell.Shell] INFO : Compaction of table sampex started for given range
-root@instance sampex> scan --sample
-2317 doc:content [] milk, eggs, bread, parmigiano-reggiano
-2317 doc:url [] file://groceries/9.txt
-3900 doc:content [] EC2 ate my homework
-3900 doc:url [] file://final_project.txt
-9255 doc:content [] abcde
-9255 doc:url [] file://foo.txt
-</code></pre>
-</div>
-
-<p>The example above is replicated in a Java program using the Accumulo API.
-Below is the program name and the command to run it.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>./bin/accumulo org.apache.accumulo.examples.simple.sample.SampleExample -i instance -z localhost -u root -p secret
-</code></pre>
-</div>
-
-<p>The commands below look under the hood to give some insight into how this
-feature works. The commands determine what files the sampex table is using.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance sampex> tables -l
-accumulo.metadata => !0
-accumulo.replication => +rep
-accumulo.root => +r
-sampex => 2
-trace => 1
-root@instance sampex> scan -t accumulo.metadata -c file -b 2 -e 2<
-2< file:hdfs://localhost:10000/accumulo/tables/2/default_tablet/A000000s.rf [] 702,8
-</code></pre>
-</div>
-
-<p>Below, <code class="highlighter-rouge">accumulo rfile-info</code> is run on the file above. This shows the
-rfile has a normal default locality group and a sample default locality group.
-The output also shows the configuration used to create the sample locality
-group. The sample configuration within an rfile must match the table's sample
-configuration for sample scans to work.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo rfile-info hdfs://localhost:10000/accumulo/tables/2/default_tablet/A000000s.rf
-Reading file: hdfs://localhost:10000/accumulo/tables/2/default_tablet/A000000s.rf
-RFile Version : 8
-
-Locality group : <DEFAULT>
- Start block : 0
- Num blocks : 1
- Index level 0 : 35 bytes 1 blocks
- First key : 2317 doc:content [] 1437672014986 false
- Last key : 9255 doc:url [] 1437672014875 false
- Num entries : 8
- Column families : [doc]
-
-Sample Configuration :
- Sampler class : org.apache.accumulo.core.client.sample.RowSampler
- Sampler options : {hasher=murmur3_32, modulus=2}
-
-Sample Locality group : <DEFAULT>
- Start block : 0
- Num blocks : 1
- Index level 0 : 36 bytes 1 blocks
- First key : 2317 doc:content [] 1437672014986 false
- Last key : 9255 doc:url [] 1437672014875 false
- Num entries : 6
- Column families : [doc]
-
-Meta block : BCFile.index
- Raw size : 4 bytes
- Compressed size : 12 bytes
- Compression type : gz
-
-Meta block : RFile.index
- Raw size : 309 bytes
- Compressed size : 176 bytes
- Compression type : gz
-</code></pre>
-</div>
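<p>For reference, the shell steps above map onto the Java client API roughly as
follows. This is a hedged sketch rather than the SampleExample source itself: the
instance name, ZooKeeper host, and credentials are placeholders, and it assumes the
1.8 sampling API (SamplerConfiguration together with
TableOperations.setSamplerConfiguration and ScannerBase.setSamplerConfiguration).</p>

<div class="highlighter-rouge"><pre class="highlight"><code>import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.admin.CompactionConfig;
import org.apache.accumulo.core.client.sample.RowSampler;
import org.apache.accumulo.core.client.sample.SamplerConfiguration;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class SamplingSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details; substitute your own instance and credentials.
    Connector conn = new ZooKeeperInstance("instance", "localhost")
        .getConnector("root", new PasswordToken("secret"));

    // Same sampler the shell session configures: keep rows where murmur3_32(row) % 3 == 0.
    SamplerConfiguration samplerConf = new SamplerConfiguration(RowSampler.class.getName());
    Map<String,String> options = new HashMap<>();
    options.put("hasher", "murmur3_32");
    options.put("modulus", "3");
    samplerConf.setOptions(options);
    conn.tableOperations().setSamplerConfiguration("sampex", samplerConf);

    // Rewrite the table's files so they contain sample data. (The shell's
    // 'compact --sf-no-sample' limits this to files that lack a sample.)
    conn.tableOperations().compact("sampex", new CompactionConfig().setWait(true));

    // Scan only the sample; the scanner's sampler configuration must match the table's.
    Scanner scanner = conn.createScanner("sampex", Authorizations.EMPTY);
    scanner.setSamplerConfiguration(samplerConf);
    for (Entry<Key,Value> entry : scanner) {
      System.out.println(entry.getKey() + " -> " + entry.getValue());
    }
  }
}
</code></pre>
</div>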
-
-<h2 id="shard-sampling-example">Shard Sampling Example</h2>
-
-<p><code class="highlighter-rouge">README.shard</code> shows how to index and search files using Accumulo. That
-example indexes documents into a table named <code class="highlighter-rouge">shard</code>. The indexing scheme used
-in that example places the document name in the column qualifier. A useful
-sample of this indexing scheme should contain all data for any document in the
-sample. To accomplish this, the following commands build a sample for the
-shard table based on the column qualifier.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance shard> config -t shard -s table.sampler.opt.hasher=murmur3_32
-root@instance shard> config -t shard -s table.sampler.opt.modulus=101
-root@instance shard> config -t shard -s table.sampler.opt.qualifier=true
-root@instance shard> config -t shard -s table.sampler=org.apache.accumulo.core.client.sample.RowColumnSampler
-root@instance shard> compact -t shard --sf-no-sample -w
-2015-07-23 15:00:09,280 [shell.Shell] INFO : Compacting table ...
-2015-07-23 15:00:10,134 [shell.Shell] INFO : Compaction of table shard completed for given range
-</code></pre>
-</div>
-
-<p>After enabling sampling, the command below counts the number of documents in
-the sample containing the words <code class="highlighter-rouge">import</code> and <code class="highlighter-rouge">int</code>.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query --sample -i instance16 -z localhost -t shard -u root -p secret import int | fgrep '.java' | wc
- 11 11 1246
-</code></pre>
-</div>
-
-<p>The command below counts the total number of documents containing the words
-<code class="highlighter-rouge">import</code> and <code class="highlighter-rouge">int</code>.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance16 -z localhost -t shard -u root -p secret import int | fgrep '.java' | wc
- 1085 1085 118175
-</code></pre>
-</div>
-
-<p>A count of 11 in the sample versus 1085 in total is around what would be expected
-for a modulus of 101. Querying the sample first provides a quick way to estimate
-how much data the real query will bring back.</p>
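<p>The estimate-then-query idea above can also be driven directly from the client
API. The sketch below is not the Query program itself; it is a hedged illustration,
with placeholder connection settings, that counts entries in the sample (using a
scanner-side sampler configuration matching the table's) and only runs the full scan
when the extrapolated estimate looks small enough.</p>

<div class="highlighter-rouge"><pre class="highlight"><code>import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.sample.RowColumnSampler;
import org.apache.accumulo.core.client.sample.SamplerConfiguration;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class EstimateThenScan {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details.
    Connector conn = new ZooKeeperInstance("instance16", "localhost")
        .getConnector("root", new PasswordToken("secret"));

    // Must match the sampler configured on the shard table above.
    SamplerConfiguration sc = new SamplerConfiguration(RowColumnSampler.class.getName());
    Map<String,String> opts = new HashMap<>();
    opts.put("hasher", "murmur3_32");
    opts.put("modulus", "101");
    opts.put("qualifier", "true");
    sc.setOptions(opts);

    Scanner scanner = conn.createScanner("shard", Authorizations.EMPTY);

    // First pass: count entries in the pre-built sample only.
    scanner.setSamplerConfiguration(sc);
    long sampleCount = 0;
    for (Entry<Key,Value> entry : scanner) {
      sampleCount++;
    }

    // With a modulus of 101, the full table holds roughly 101x the sample.
    long estimate = sampleCount * 101;
    System.out.println("estimated entries: " + estimate);

    // Second pass: only run the full scan if the estimate is acceptable
    // (the 100000 threshold here is an arbitrary example value).
    if (estimate < 100000) {
      scanner.clearSamplerConfiguration();
      for (Entry<Key,Value> entry : scanner) {
        // process the full result set
      }
    }
  }
}
</code></pre>
</div>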
-
-<p>Another way sample data could be used with the shard example is with a
-specialized iterator. In the examples source code there is an iterator named
-CutoffIntersectingIterator. This iterator first checks how many documents are
-found in the sample data. If too many documents are found in the sample data,
-then it returns nothing. Otherwise it proceeds to query the full data set.
-To experiment with this iterator, use the following command. The
-<code class="highlighter-rouge">--sampleCutoff</code> option below will cause the query to return nothing if, based
-on the sample, it appears a query would return more than 1000 documents.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query --sampleCutoff 1000 -i instance16 -z localhost -t shard -u root -p secret import int | fgrep '.java' | wc
-</code></pre>
-</div>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/535d261e/1.8/examples/_site/shard.html
----------------------------------------------------------------------
diff --git a/1.8/examples/_site/shard.html b/1.8/examples/_site/shard.html
deleted file mode 100644
index 159aa58..0000000
--- a/1.8/examples/_site/shard.html
+++ /dev/null
@@ -1,61 +0,0 @@
-<p>Accumulo has an iterator called the intersecting iterator which supports querying a term index that is partitioned by
-document, or “sharded”. This example shows how to use the intersecting iterator through these four programs:</p>
-
-<ul>
- <li>Index.java - Indexes a set of text files into an Accumulo table.</li>
- <li>Query.java - Finds documents containing a given set of terms.</li>
- <li>Reverse.java - Reads the index table and writes a map of documents to terms into another table.</li>
- <li>ContinuousQuery.java - Uses the table populated by Reverse.java to select N random terms per document. Then it continuously and randomly queries those terms.</li>
-</ul>
-
-<p>To run these example programs, create two tables as shown below.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance> createtable shard
-username@instance shard> createtable doc2term
-</code></pre>
-</div>
-
-<p>After creating the tables, index some files. The following command indexes all of the Java files in the Accumulo source code.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd /local/username/workspace/accumulo/
-$ find core/src server/src -name "*.java" | xargs ./bin/accumulo org.apache.accumulo.examples.simple.shard.Index -i instance -z zookeepers -t shard -u username -p password --partitions 30
-</code></pre>
-</div>
-
-<p>The following command queries the index to find all files containing “foo” and “bar”.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ cd $ACCUMULO_HOME
-$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z zookeepers -t shard -u username -p password foo bar
-/local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java
-/local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java
-/local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/VisibilityEvaluatorTest.java
-/local/username/workspace/accumulo/src/server/src/main/java/accumulo/test/functional/RowDeleteTest.java
-/local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/logger/TestLogWriter.java
-/local/username/workspace/accumulo/src/server/src/main/java/accumulo/test/functional/DeleteEverythingTest.java
-/local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/data/KeyExtentTest.java
-/local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/constraints/MetadataConstraintsTest.java
-/local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/iterators/WholeRowIteratorTest.java
-/local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/util/DefaultMapTest.java
-/local/username/workspace/accumulo/src/server/src/test/java/accumulo/server/tabletserver/InMemoryMapTest.java
-</code></pre>
-</div>
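<p>Under the hood, Query drives the intersecting iterator. A minimal sketch of that
pattern with a BatchScanner is shown below; the connection settings are placeholders,
and the iterator priority and name are arbitrary choices.</p>

<div class="highlighter-rouge"><pre class="highlight"><code>import java.util.Collections;
import java.util.Map.Entry;

import org.apache.accumulo.core.client.BatchScanner;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.IteratorSetting;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Range;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.iterators.user.IntersectingIterator;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.io.Text;

public class IntersectingQuerySketch {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details.
    Connector conn = new ZooKeeperInstance("instance", "zookeepers")
        .getConnector("username", new PasswordToken("password"));

    // Terms to intersect; each term is a column family in the sharded index.
    Text[] terms = {new Text("foo"), new Text("bar")};

    BatchScanner bs = conn.createBatchScanner("shard", Authorizations.EMPTY, 10);
    bs.setRanges(Collections.singleton(new Range()));   // search every shard (row)

    IteratorSetting ii = new IteratorSetting(20, "ii", IntersectingIterator.class);
    IntersectingIterator.setColumnFamilies(ii, terms);
    bs.addScanIterator(ii);

    // The iterator emits one entry per document containing all terms; the
    // document id is in the column qualifier.
    for (Entry<Key,Value> entry : bs) {
      System.out.println(entry.getKey().getColumnQualifier());
    }
    bs.close();
  }
}
</code></pre>
</div>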
-
-<p>In order to run ContinuousQuery, we need to run Reverse.java to populate doc2term.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Reverse -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password
-</code></pre>
-</div>
-
-<p>Below, ContinuousQuery is run using 5 terms. It selects 5 random terms from each document, then it continually
-picks one of those sets of 5 terms at random and queries with it. It prints the number of matching documents and the query time in seconds.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo org.apache.accumulo.examples.simple.shard.ContinuousQuery -i instance -z zookeepers --shardTable shard --doc2Term doc2term -u username -p password --terms 5
-[public, core, class, binarycomparable, b] 2 0.081
-[wordtodelete, unindexdocument, doctablename, putdelete, insert] 1 0.041
-[import, columnvisibilityinterpreterfactory, illegalstateexception, cv, columnvisibility] 1 0.049
-[getpackage, testversion, util, version, 55] 1 0.048
-[for, static, println, public, the] 55 0.211
-[sleeptime, wrappingiterator, options, long, utilwaitthread] 1 0.057
-[string, public, long, 0, wait] 12 0.132
-</code></pre>
-</div>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/535d261e/1.8/examples/_site/tabletofile.html
----------------------------------------------------------------------
diff --git a/1.8/examples/_site/tabletofile.html b/1.8/examples/_site/tabletofile.html
deleted file mode 100644
index bcf57c7..0000000
--- a/1.8/examples/_site/tabletofile.html
+++ /dev/null
@@ -1,47 +0,0 @@
-<p>This example uses mapreduce to extract specified columns from an existing table.</p>
-
-<p>To run this example, you will need some data in a table.
-The following will put a trivial amount of data into Accumulo using the Accumulo shell:</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
-Shell - Apache Accumulo Interactive Shell
-- version: 1.5.0
-- instance name: instance
-- instance id: 00000000-0000-0000-0000-000000000000
--
-- type 'help' for a list of available commands
--
-username@instance> createtable input
-username@instance> insert dog cf cq dogvalue
-username@instance> insert cat cf cq catvalue
-username@instance> insert junk family qualifier junkvalue
-username@instance> quit
-</code></pre>
-</div>
-
-<p>The TableToFile class configures a map-only job to read the specified columns and
-write the key/value pairs to a file in HDFS.</p>
-
-<p>The following will extract the rows containing the column “cf:cq”:</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output
-
-$ hadoop fs -ls /tmp/output
--rw-r--r-- 1 username supergroup 0 2013-01-10 14:44 /tmp/output/_SUCCESS
-drwxr-xr-x - username supergroup 0 2013-01-10 14:44 /tmp/output/_logs
-drwxr-xr-x - username supergroup 0 2013-01-10 14:44 /tmp/output/_logs/history
--rw-r--r-- 1 username supergroup 9049 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_1357847072863_username_TableToFile%5F1357847071434
--rw-r--r-- 1 username supergroup 26172 2013-01-10 14:44 /tmp/output/_logs/history/job_201301081658_0011_conf.xml
--rw-r--r-- 1 username supergroup 50 2013-01-10 14:44 /tmp/output/part-m-00000
-</code></pre>
-</div>
-
-<p>We can see the output of our little map-reduce job:</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ hadoop fs -text /tmp/output/part-m-00000
-catrow cf:cq [] catvalue
-dogrow cf:cq [] dogvalue
-$
-</code></pre>
-</div>
-
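<p>For readers who want to adapt TableToFile, the core of such a map-only job is the
AccumuloInputFormat configuration. The sketch below is a hedged approximation, not
the TableToFile source: it assumes the 1.8 mapreduce bindings, and the instance name,
ZooKeeper hosts, and credentials are placeholders.</p>

<div class="highlighter-rouge"><pre class="highlight"><code>import java.io.IOException;
import java.util.Collections;

import org.apache.accumulo.core.client.ClientConfiguration;
import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.accumulo.core.util.Pair;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class TableToFileSketch {

  // Map-only: each Accumulo entry becomes one line of text in HDFS.
  public static class ExtractMapper extends Mapper<Key,Value,Text,Text> {
    @Override
    protected void map(Key key, Value value, Context context)
        throws IOException, InterruptedException {
      context.write(new Text(key.toString()), new Text(value.toString()));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance();
    job.setJarByClass(TableToFileSketch.class);

    job.setInputFormatClass(AccumuloInputFormat.class);
    AccumuloInputFormat.setZooKeeperInstance(job,
        ClientConfiguration.loadDefault().withInstance("instance").withZkHosts("zookeepers"));
    AccumuloInputFormat.setConnectorInfo(job, "user", new PasswordToken("passwd"));
    AccumuloInputFormat.setInputTableName(job, "input");
    AccumuloInputFormat.setScanAuthorizations(job, Authorizations.EMPTY);
    // Only read the cf:cq column, matching the --columns option above.
    AccumuloInputFormat.fetchColumns(job,
        Collections.singleton(new Pair<>(new Text("cf"), new Text("cq"))));

    job.setMapperClass(ExtractMapper.class);
    job.setNumReduceTasks(0);                       // map-only
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    job.setOutputFormatClass(TextOutputFormat.class);
    TextOutputFormat.setOutputPath(job, new Path("/tmp/output"));

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
</code></pre>
</div>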
http://git-wip-us.apache.org/repos/asf/accumulo/blob/535d261e/1.8/examples/_site/terasort.html
----------------------------------------------------------------------
diff --git a/1.8/examples/_site/terasort.html b/1.8/examples/_site/terasort.html
deleted file mode 100644
index 9d6df70..0000000
--- a/1.8/examples/_site/terasort.html
+++ /dev/null
@@ -1,36 +0,0 @@
-<p>This example uses map/reduce to generate random input data that will
-be sorted by storing it into Accumulo. It uses data very similar to the
-hadoop terasort benchmark.</p>
-
-<p>To run this example, supply arguments describing the amount of data:</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \
--i instance -z zookeepers -u user -p password \
---count 10 \
---minKeySize 10 \
---maxKeySize 10 \
---minValueSize 78 \
---maxValueSize 78 \
---table sort \
---splits 10
-</code></pre>
-</div>
-
-<p>After the map reduce job completes, scan the data:</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>$ ./bin/accumulo shell -u username -p password
-username@instance> scan -t sort
-+l-$$OE/ZH c: 4 [] GGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOO
-,C)wDw//u= c: 10 [] CCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKK
-75@~?'WdUF c: 1 [] IIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQ
-;L+!2rT~hd c: 8 [] MMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUU
-LsS8)|.ZLD c: 5 [] OOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWW
-M^*dDE;6^< c: 9 [] UUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCC
-^Eu)<n#kdP c: 3 [] YYYYYYYYYYOOOOOOOOOOEEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGG
-le5awB.$sm c: 6 [] WWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYYYYOOOOOOOOOOEEEEEEEE
-q__[fwhKFg c: 7 [] EEEEEEEEEEUUUUUUUUUUKKKKKKKKKKAAAAAAAAAAQQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMM
-w[o||:N&H, c: 2 [] QQQQQQQQQQGGGGGGGGGGWWWWWWWWWWMMMMMMMMMMCCCCCCCCCCSSSSSSSSSSIIIIIIIIIIYYYYYYYY
-</code></pre>
-</div>
-
-<p>Of course, a real benchmark would ingest millions of entries.</p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/535d261e/1.8/examples/_site/visibility.html
----------------------------------------------------------------------
diff --git a/1.8/examples/_site/visibility.html b/1.8/examples/_site/visibility.html
deleted file mode 100644
index c49ca7a..0000000
--- a/1.8/examples/_site/visibility.html
+++ /dev/null
@@ -1,129 +0,0 @@
-<h2 id="creating-a-new-user">Creating a new user</h2>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>root@instance> createuser username
-Enter new password for 'username': ********
-Please confirm new password for 'username': ********
-root@instance> user username
-Enter password for user username: ********
-username@instance> createtable vistest
-06 10:48:47,931 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
-username@instance> userpermissions
-System permissions:
-
-Table permissions (accumulo.metadata): Table.READ
-username@instance>
-</code></pre>
-</div>
-
-<p>A user does not by default have permission to create a table.</p>
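<p>The next sections grant a permission and, later, set scan authorizations from the
shell. For reference, the corresponding Java API calls (issued as the root user) look
roughly like this; the connection settings are placeholders.</p>

<div class="highlighter-rouge"><pre class="highlight"><code>import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.accumulo.core.security.SystemPermission;

public class UserAdminSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details; connect as the root user.
    Connector conn = new ZooKeeperInstance("instance", "localhost")
        .getConnector("root", new PasswordToken("secret"));

    // Let 'username' create tables (shell: grant -s System.CREATE_TABLE -u username).
    conn.securityOperations().grantSystemPermission("username", SystemPermission.CREATE_TABLE);

    // Give 'username' some scan authorizations (shell: setauths -s A,B,broccoli -u username).
    conn.securityOperations().changeUserAuthorizations("username",
        new Authorizations("A", "B", "broccoli"));
  }
}
</code></pre>
</div>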
-
-<h2 id="granting-permissions-to-a-user">Granting permissions to a user</h2>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance> user root
-Enter password for user root: ********
-root@instance> grant -s System.CREATE_TABLE -u username
-root@instance> user username
-Enter password for user username: ********
-username@instance> createtable vistest
-username@instance> userpermissions
-System permissions: System.CREATE_TABLE
-
-Table permissions (accumulo.metadata): Table.READ
-Table permissions (vistest): Table.READ, Table.WRITE, Table.BULK_IMPORT, Table.ALTER_TABLE, Table.GRANT, Table.DROP_TABLE
-username@instance vistest>
-</code></pre>
-</div>
-
-<h2 id="inserting-data-with-visibilities">Inserting data with visibilities</h2>
-
-<p>Visibilities are boolean AND (&) and OR (|) combinations of authorization
-tokens. Authorization tokens are arbitrary strings taken from a restricted
-ASCII character set. Parentheses are required to specify order of operations
-in visibilities.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest> insert row f1 q1 v1 -l A
-username@instance vistest> insert row f2 q2 v2 -l A&B
-username@instance vistest> insert row f3 q3 v3 -l apple&carrot|broccoli|spinach
-06 11:19:01,432 [shell.Shell] ERROR: org.apache.accumulo.core.util.BadArgumentException: cannot mix | and & near index 12
-apple&carrot|broccoli|spinach
- ^
-username@instance vistest> insert row f3 q3 v3 -l (apple&carrot)|broccoli|spinach
-username@instance vistest>
-</code></pre>
-</div>
-
-<h2 id="scanning-with-authorizations">Scanning with authorizations</h2>
-
-<p>Authorizations are sets of authorization tokens. Each Accumulo user has
-authorizations and each Accumulo scan has authorizations. Scan authorizations
-are only allowed to be a subset of the user's authorizations. By default, a
-user's authorizations set is empty.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest> scan
-username@instance vistest> scan -s A
-06 11:43:14,951 [shell.Shell] ERROR: java.lang.RuntimeException: org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_AUTHORIZATIONS - The user does not have the specified authorizations assigned
-username@instance vistest>
-</code></pre>
-</div>
-
-<h2 id="setting-authorizations-for-a-user">Setting authorizations for a user</h2>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest> setauths -s A
-06 11:53:42,056 [shell.Shell] ERROR: org.apache.accumulo.core.client.AccumuloSecurityException: Error PERMISSION_DENIED - User does not have permission to perform this action
-username@instance vistest>
-</code></pre>
-</div>
-
-<p>A user cannot set authorizations unless the user has the System.ALTER_USER permission.
-The root user has this permission.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest> user root
-Enter password for user root: ********
-root@instance vistest> setauths -s A -u username
-root@instance vistest> user username
-Enter password for user username: ********
-username@instance vistest> scan -s A
-row f1:q1 [A] v1
-username@instance vistest> scan
-row f1:q1 [A] v1
-username@instance vistest>
-</code></pre>
-</div>
-
-<p>The default authorizations for a scan are the user's entire set of authorizations.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest> user root
-Enter password for user root: ********
-root@instance vistest> setauths -s A,B,broccoli -u username
-root@instance vistest> user username
-Enter password for user username: ********
-username@instance vistest> scan
-row f1:q1 [A] v1
-row f2:q2 [A&B] v2
-row f3:q3 [(apple&carrot)|broccoli|spinach] v3
-username@instance vistest> scan -s B
-username@instance vistest>
-</code></pre>
-</div>
-
-<p>If you want, you can limit a user to inserting only data which they themselves can read.
-It can be set with the following constraint.</p>
-
-<div class="highlighter-rouge"><pre class="highlight"><code>username@instance vistest> user root
-Enter password for user root: ******
-root@instance vistest> config -t vistest -s table.constraint.1=org.apache.accumulo.core.security.VisibilityConstraint
-root@instance vistest> user username
-Enter password for user username: ********
-username@instance vistest> insert row f4 q4 v4 -l spinach
- Constraint Failures:
- ConstraintViolationSummary(constrainClass:org.apache.accumulo.core.security.VisibilityConstraint, violationCode:2, violationDescription:User does not have authorization on column visibility, numberOfViolatingMutations:1)
-username@instance vistest> insert row f4 q4 v4 -l spinach|broccoli
-username@instance vistest> scan
-row f1:q1 [A] v1
-row f2:q2 [A&B] v2
-row f3:q3 [(apple&carrot)|broccoli|spinach] v3
-row f4:q4 [spinach|broccoli] v4
-username@instance vistest>
-</code></pre>
-</div>
-
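<p>For completeness, here is the same insert-and-scan flow with visibilities written
against the Java API. This is a sketch with placeholder connection settings; it
assumes the user already holds the A authorization, as configured above.</p>

<div class="highlighter-rouge"><pre class="highlight"><code>import java.util.Map.Entry;

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.BatchWriterConfig;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.accumulo.core.security.ColumnVisibility;

public class VisibilitySketch {
  public static void main(String[] args) throws Exception {
    // Placeholder connection details.
    Connector conn = new ZooKeeperInstance("instance", "localhost")
        .getConnector("username", new PasswordToken("password"));

    // Write a value that only users holding both A and B can read.
    BatchWriter writer = conn.createBatchWriter("vistest", new BatchWriterConfig());
    Mutation m = new Mutation("row");
    m.put("f2", "q2", new ColumnVisibility("A&B"), "v2");
    writer.addMutation(m);
    writer.close();

    // Scan with only the A authorization; the A&B entry is filtered out.
    // Scan authorizations must be a subset of the user's authorizations.
    Scanner scanner = conn.createScanner("vistest", new Authorizations("A"));
    for (Entry<Key,Value> entry : scanner) {
      System.out.println(entry.getKey() + " -> " + entry.getValue());
    }
  }
}
</code></pre>
</div>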