http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/7cc70b2e/_docs-unreleased/getting-started/design.md
----------------------------------------------------------------------
diff --git a/_docs-unreleased/getting-started/design.md 
b/_docs-unreleased/getting-started/design.md
new file mode 100644
index 0000000..251ae9f
--- /dev/null
+++ b/_docs-unreleased/getting-started/design.md
@@ -0,0 +1,165 @@
+---
+title: Accumulo Design
+category: getting-started
+order: 1
+---
+
+## Data Model
+
+Accumulo provides a richer data model than simple key-value stores, but is not a
+fully relational database. Data is represented as key-value pairs, where the key
+and value are composed of the following elements:
+
+![key value pair]({{ site.url }}/images/docs/key_value.png)
+
+All elements of the Key and the Value are represented as byte arrays except for
+Timestamp, which is a Long. Accumulo sorts keys by element and lexicographically
+in ascending order. Timestamps are sorted in descending order so that later
+versions of the same Key appear first in a sequential scan. Tables consist of a
+set of sorted key-value pairs.
+
+## Architecture
+
+Accumulo is a distributed data storage and retrieval system and, as such, consists
+of several architectural components, some of which run on many individual servers.
+Much of the work Accumulo does involves maintaining certain properties of the
+data, such as organization, availability, and integrity, across many
+commodity-class machines.
+
+## Components
+
+An instance of Accumulo includes many TabletServers, one Garbage Collector
+process, one Master server, and many Clients.
+
+### Tablet Server
+
+The TabletServer manages some subset of all the tablets (partitions of tables).
+This includes receiving writes from clients, persisting writes to a
+write-ahead log, sorting new key-value pairs in memory, periodically
+flushing sorted key-value pairs to new files in HDFS, and responding
+to reads from clients, forming a merge-sorted view of all keys and
+values from all the files it has created and the sorted in-memory
+store.
+
+TabletServers also perform recovery of a tablet
+that was previously on a server that failed, reapplying any writes
+found in the write-ahead log to the tablet.
+
+### Garbage Collector
+
+Accumulo processes will share files stored in HDFS. Periodically, the Garbage
+Collector will identify files that are no longer needed by any process, and
+delete them. Multiple garbage collectors can be run to provide hot-standby
+support. They will perform leader election among themselves to choose a single
+active instance.
+
+### Master
+
+The Accumulo Master is responsible for detecting and responding to TabletServer
+failure. It tries to balance the load across TabletServers by assigning tablets
+carefully and instructing TabletServers to unload tablets when necessary. The
+Master ensures all tablets are each assigned to exactly one TabletServer, and it
+handles table creation, alteration, and deletion requests from clients. The Master
+also coordinates startup, graceful shutdown, and recovery of changes in
+write-ahead logs when TabletServers fail.
+
+Multiple masters may be run. The masters will choose a single active master among
+themselves, and the others will become backups should the active master fail.
+
+### Tracer
+
+The Accumulo Tracer process supports the distributed timing API provided by
+Accumulo. One or more of these processes can be run on a cluster; they write
+timing information to a given Accumulo table for future reference. See the
+section on Tracing for more information on this support.
+
+### Monitor
+
+The Accumulo Monitor is a web application that provides a wealth of information
+about the state of an instance. The Monitor shows graphs and tables which contain
+information about read/write rates, cache hit/miss rates, and Accumulo table
+information such as scan rate and active/queued compactions. Additionally, the
+Monitor should always be the first point of entry when attempting to debug an
+Accumulo problem, as it will show high-level problems in addition to aggregated
+errors from all nodes in the cluster. See the [Accumulo monitor documentation][monitor]
+for more information.
+
+Multiple Monitors can be run to provide hot-standby support in the face of
+failure. Due to the forwarding of logs from remote hosts to the Monitor, only one
+Monitor process should be active at a time. Leader election will be performed
+internally to choose the active Monitor.
+
+### Client
+
+Accumulo includes a client library that is linked to every application. The client
+library contains logic for finding servers managing a particular tablet, and for
+communicating with TabletServers to write and retrieve key-value pairs.
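+
+A minimal sketch of client usage follows; the instance name, ZooKeeper address,
+table name, and credentials are illustrative, and exception handling is omitted.
+
+```java
+// Classes below come from org.apache.accumulo.core.client and related packages.
+Instance instance = new ZooKeeperInstance("myinstance", "localhost:2181");
+Connector conn = instance.getConnector("user", new PasswordToken("password"));
+
+// Write a single key-value pair.
+BatchWriter writer = conn.createBatchWriter("mytable", new BatchWriterConfig());
+Mutation m = new Mutation("row1");
+m.put("colf", "colq", "value1");
+writer.addMutation(m);
+writer.close();
+
+// Read it back with a scanner.
+Scanner scanner = conn.createScanner("mytable", Authorizations.EMPTY);
+for (Entry<Key,Value> entry : scanner) {
+  System.out.println(entry.getKey() + " -> " + entry.getValue());
+}
+```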
+
+## Data Management
+
+Accumulo stores data in tables, which are partitioned into tablets. Tablets are
+partitioned on row boundaries so that all of the columns and values for a
+particular row are found together within the same tablet. The Master assigns
+tablets to one TabletServer at a time. This enables row-level transactions to take
+place without using distributed locking or some other complicated synchronization
+mechanism. As clients insert and query data, and as machines are added and removed
+from the cluster, the Master migrates tablets to ensure they remain available and
+that the ingest and query load is balanced across the cluster.
+
+![data distribution]({{ site.url }}/images/docs/data_distribution.png)
+
+## Tablet Service
+
+When a write arrives at a TabletServer it is written to a Write-Ahead Log and
+then inserted into a sorted data structure in memory called a MemTable. When the
+MemTable reaches a certain size, the TabletServer writes out the sorted
+key-value pairs to a file in HDFS called a Relative Key File (RFile), which is a
+kind of Indexed Sequential Access Method (ISAM) file. This process is called a
+minor compaction. A new MemTable is then created, and the fact of the compaction
+is recorded in the Write-Ahead Log.
+
+When a request to read data arrives at a TabletServer, the TabletServer does a
+binary search across the MemTable as well as the in-memory indexes associated
+with each RFile to find the relevant values. If clients are performing a scan,
+several key-value pairs are returned to the client in order from the MemTable
+and the set of RFiles by performing a merge-sort as they are read.
+
+## Compactions
+
+In order to manage the number of files per tablet, the TabletServer periodically
+performs Major Compactions of files within a tablet, in which some set of RFiles
+are combined into one file. The previous files will eventually be removed by the
+Garbage Collector. This also provides an opportunity to permanently remove
+deleted key-value pairs by omitting key-value pairs suppressed by a delete entry
+when the new file is created.
+
+## Splitting
+
+When a table is created it has one tablet. As the table grows, its initial
+tablet eventually splits into two tablets. It is likely that one of these
+tablets will migrate to another tablet server. As the table continues to grow,
+its tablets will continue to split and be migrated. The decision to
+automatically split a tablet is based on the size of a tablet's files. The
+size threshold at which a tablet splits is configurable per table. In addition
+to automatic splitting, a user can manually add split points to a table to
+create new tablets. Manually splitting a new table can parallelize reads and
+writes, giving better initial performance without waiting for automatic
+splitting.
+
+As data is deleted from a table, tablets may shrink. Over time this can lead
+to small or empty tablets. To deal with this, merging of tablets was
+introduced in Accumulo 1.4. This is discussed in more detail later.
+
+## Fault-Tolerance
+
+If a TabletServer fails, the Master detects it and automatically reassigns the
+tablets from the failed server to other servers. Any key-value pairs that were in
+memory at the time the TabletServer failed are automatically reapplied from the
+Write-Ahead Log (WAL) to prevent any loss of data.
+
+Tablet servers write their WALs directly to HDFS so the logs are available to all
+tablet servers for recovery. To make the recovery process efficient, the updates
+within a log are grouped by tablet. TabletServers can quickly apply the mutations
+from the sorted logs that are destined for the tablets they have now been assigned.
+
+TabletServer failures are noted on the Master's monitor page, accessible via
+`http://master-address:9995/monitor`.
+
+![failure handling]({{ site.url }}/images/docs/failure_handling.png)
+
+[monitor]: {{page.docs_baseurl}}/administration/overview#monitoring

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/7cc70b2e/_docs-unreleased/getting-started/shell.md
----------------------------------------------------------------------
diff --git a/_docs-unreleased/getting-started/shell.md 
b/_docs-unreleased/getting-started/shell.md
new file mode 100644
index 0000000..7fa56d9
--- /dev/null
+++ b/_docs-unreleased/getting-started/shell.md
@@ -0,0 +1,145 @@
+---
+title: Accumulo Shell
+category: getting-started
+order: 3
+---
+
+Accumulo provides a simple shell that can be used to examine the contents and
+configuration settings of tables, insert/update/delete values, and change
+configuration settings.
+
+The shell can be started by the following command:
+
+    accumulo shell -u [username]
+
+The shell will prompt for the corresponding password to the username specified
+and then display the following prompt:
+
+    Shell - Apache Accumulo Interactive Shell
+    -
+    - version: 2.x.x
+    - instance name: myinstance
+    - instance id: 00000000-0000-0000-0000-000000000000
+    -
+    - type 'help' for a list of available commands
+    -
+    root@myinstance>
+
+## Basic Administration
+
+The Accumulo shell can be used to create and delete tables, as well as to configure
+table- and instance-specific options.
+
+```
+root@myinstance> tables
+accumulo.metadata
+accumulo.root
+
+root@myinstance> createtable mytable
+
+root@myinstance mytable>
+
+root@myinstance mytable> tables
+accumulo.metadata
+accumulo.root
+mytable
+
+root@myinstance mytable> createtable testtable
+
+root@myinstance testtable>
+
+root@myinstance testtable> deletetable testtable
+deletetable { testtable } (yes|no)? yes
+Table: [testtable] has been deleted.
+
+root@myinstance>
+```
+
+The Shell can also be used to insert updates and scan tables. This is useful for
+inspecting tables.
+
+    root@myinstance mytable> scan
+
+    root@myinstance mytable> insert row1 colf colq value1
+    insert successful
+
+    root@myinstance mytable> scan
+    row1 colf:colq [] value1
+
+The value in brackets `[]` shows the entry's visibility labels. Since none were
+used, it is empty for this row. You can use the `-st` option to scan to see the
+timestamp for the cell, too.
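+
+For example, scanning with `-st` prints the timestamp between the visibility
+brackets and the value (the timestamp shown here is illustrative):
+
+    root@myinstance mytable> scan -st
+    row1 colf:colq [] 1487000000000    value1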
+
+## Table Maintenance
+
+The *compact* command instructs Accumulo to schedule a compaction of the table,
+during which files are consolidated and deleted entries are removed.
+
+    root@myinstance mytable> compact -t mytable
+    07 16:13:53,201 [shell.Shell] INFO : Compaction of table mytable started for given range
+
+The *flush* command instructs Accumulo to write all entries currently in memory
+for a given table to disk.
+
+    root@myinstance mytable> flush -t mytable
+    07 16:14:19,351 [shell.Shell] INFO : Flush of table mytable
+    initiated...
+
+## User Administration
+
+The Shell can be used to add, remove, and grant privileges to users.
+
+```
+root@myinstance mytable> createuser bob
+Enter new password for 'bob': *********
+Please confirm new password for 'bob': *********
+
+root@myinstance mytable> authenticate bob
+Enter current password for 'bob': *********
+Valid
+
+root@myinstance mytable> grant System.CREATE_TABLE -s -u bob
+
+root@myinstance mytable> user bob
+Enter current password for 'bob': *********
+
+bob@myinstance mytable> userpermissions
+System permissions: System.CREATE_TABLE
+Table permissions (accumulo.metadata): Table.READ
+Table permissions (mytable): NONE
+
+bob@myinstance mytable> createtable bobstable
+
+bob@myinstance bobstable>
+
+bob@myinstance bobstable> user root
+Enter current password for 'root': *********
+
+root@myinstance bobstable> revoke System.CREATE_TABLE -s -u bob
+```
+
+## JSR-223 Support in the Shell
+
+The script command can be used to invoke programs written in languages supported
+by installed JSR-223 engines. You can get a list of installed engines with the
+`-l` argument. Below is an example of the output of the command when running the
+shell with Java 7.
+
+    root@fake> script -l
+        Engine Alias: ECMAScript
+        Engine Alias: JavaScript
+        Engine Alias: ecmascript
+        Engine Alias: javascript
+        Engine Alias: js
+        Engine Alias: rhino
+        Language: ECMAScript (1.8)
+        Script Engine: Mozilla Rhino (1.7 release 3 PRERELEASE)
+    ScriptEngineFactory Info
+
+A list of compatible languages can be found on [this page](https://en.wikipedia.org/wiki/List_of_JVM_languages).
+The Rhino JavaScript engine is provided with the JVM. Typically, putting a jar on
+the classpath is all that is needed to install a new engine.
+
+When writing scripts to run in the shell, you will have a variable called
+`connection` already available to you. This variable is a reference to an Accumulo
+Connector object, the same connection that the shell is using to communicate with
+the Accumulo servers. At this point you can use any of the public API methods
+within your script. Reference the script command help to see all of the execution
+options. Script and script invocation examples can be found in ACCUMULO-1399.

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/7cc70b2e/_docs-unreleased/getting-started/table_configuration.md
----------------------------------------------------------------------
diff --git a/_docs-unreleased/getting-started/table_configuration.md 
b/_docs-unreleased/getting-started/table_configuration.md
new file mode 100644
index 0000000..e86a1fb
--- /dev/null
+++ b/_docs-unreleased/getting-started/table_configuration.md
@@ -0,0 +1,664 @@
+---
+title: Table Configuration
+category: getting-started
+order: 5
+---
+
+Accumulo tables have a few options that can be configured to alter the default
+behavior of Accumulo as well as improve performance based on the data stored.
+These include locality groups, constraints, bloom filters, iterators, and block
+cache.  See the [configuration documentation][config] for a complete list of
+available configuration options.
+
+## Locality Groups
+
+Accumulo supports storing sets of column families separately on disk to allow
+clients to efficiently scan over columns that are frequently used together and to
+avoid scanning over column families that are not requested. After a locality group
+is set, Scanner and BatchScanner operations will automatically take advantage of
+them whenever the fetchColumnFamilies() method is used.
+
+By default, tables place all column families into the same "default" locality group.
+Additional locality groups can be configured at any time via the shell or
+programmatically as follows:
+
+### Managing Locality Groups via the Shell
+
+    usage: setgroups <group>=<col fam>{,<col fam>}{ <group>=<col fam>{,<col fam>}}
+        [-?] -t <table>
+
+    user@myinstance mytable> setgroups group_one=colf1,colf2 -t mytable
+
+    user@myinstance mytable> getgroups -t mytable
+
+### Managing Locality Groups via the Client API
+
+```java
+Connector conn;
+
+HashMap<String,Set<Text>> localityGroups = new HashMap<String, Set<Text>>();
+
+HashSet<Text> metadataColumns = new HashSet<Text>();
+metadataColumns.add(new Text("domain"));
+metadataColumns.add(new Text("link"));
+
+HashSet<Text> contentColumns = new HashSet<Text>();
+contentColumns.add(new Text("body"));
+contentColumns.add(new Text("images"));
+
+localityGroups.put("metadata", metadataColumns);
+localityGroups.put("content", contentColumns);
+
+conn.tableOperations().setLocalityGroups("mytable", localityGroups);
+
+// existing locality groups can be obtained as follows
+Map<String, Set<Text>> groups =
+    conn.tableOperations().getLocalityGroups("mytable");
+```
+
+The assignment of Column Families to Locality Groups can be changed at any time.
+The physical movement of column families into their new locality groups takes
+place via the periodic Major Compaction process that runs continuously in the
+background. Major Compaction can also be scheduled to take place immediately
+through the shell:
+
+    user@myinstance mytable> compact -t mytable
+
+## Constraints
+
+Accumulo supports constraints applied on mutations at insert time. This can be
+used to disallow certain inserts according to a user-defined policy. Any mutation
+that fails to meet the requirements of the constraint is rejected and sent back to
+the client.
+
+Constraints can be enabled by setting a table property as follows:
+
+```
+user@myinstance mytable> constraint -t mytable -a com.test.ExampleConstraint com.test.AnotherConstraint
+
+user@myinstance mytable> constraint -l
+com.test.ExampleConstraint=1
+com.test.AnotherConstraint=2
+```
+
+Currently there are no general-purpose constraints provided with the Accumulo
+distribution. New constraints can be created by writing a Java class that
+implements the following interface:
+
+     org.apache.accumulo.core.constraints.Constraint
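+
+As a hedged sketch (the class name and violation code below are illustrative, not
+a constraint shipped with Accumulo), a constraint that rejects mutations with an
+empty row might look like this:
+
+```java
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.accumulo.core.constraints.Constraint;
+import org.apache.accumulo.core.data.Mutation;
+
+public class NonEmptyRowConstraint implements Constraint {
+  private static final short EMPTY_ROW = 1;
+
+  @Override
+  public String getViolationDescription(short violationCode) {
+    return violationCode == EMPTY_ROW ? "row was empty" : null;
+  }
+
+  @Override
+  public List<Short> check(Environment env, Mutation mutation) {
+    // Returning null (or an empty list) signals the mutation is acceptable.
+    if (mutation.getRow().length == 0)
+      return Collections.singletonList(EMPTY_ROW);
+    return null;
+  }
+}
+```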
+
+To deploy a new constraint, create a jar file containing the class implementing
+the new constraint and place it in the lib directory of the Accumulo installation.
+New constraint jars can be added to Accumulo and enabled without restarting, but
+any change to an existing constraint class requires Accumulo to be restarted.
+
+See the [constraints examples](https://github.com/apache/accumulo-examples/blob/master/docs/contraints.md)
+for example code.
+
+## Bloom Filters
+
+As mutations are applied to an Accumulo table, several files are created per
+tablet. If bloom filters are enabled, Accumulo will create and load a small data
+structure into memory to determine whether a file contains a given key before
+opening the file. This can speed up lookups considerably.
+
+To enable bloom filters, enter the following command in the Shell:
+
+    user@myinstance> config -t mytable -s table.bloom.enabled=true
+
+The [bloom filter examples](https://github.com/apache/accumulo-examples/blob/master/docs/bloom.md)
+contain an extensive example of using Bloom Filters.
+
+## Iterators
+
+Iterators provide a modular mechanism for adding functionality to be executed by
+TabletServers when scanning or compacting data. This allows users to efficiently
+summarize, filter, and aggregate data. In fact, the built-in features of
+cell-level security and column fetching are implemented using Iterators.
+Some useful Iterators are provided with Accumulo and can be found in the
+*`org.apache.accumulo.core.iterators.user`* package.
+In each case, any custom Iterators must be included in Accumulo's classpath,
+typically by including a jar in `lib/` or `lib/ext/`, although the VFS classloader
+allows for classpath manipulation using a variety of schemes including URLs and
+HDFS URIs.
+
+### Setting Iterators via the Shell
+
+Iterators can be configured on a table at scan, minor compaction, and/or major
+compaction scopes. If the Iterator implements the OptionDescriber interface, the
+setiter command can be used, which will interactively prompt the user to provide
+values for the necessary options.
+
+    usage: setiter [-?] -ageoff | -agg | -class <name> | -regex |
+        -reqvis | -vers   [-majc] [-minc] [-n <itername>] -p <pri>
+        [-scan] [-t <table>]
+
+    user@myinstance mytable> setiter -t mytable -scan -p 15 -n myiter -class com.company.MyIterator
+
+The config command can always be used to manually configure iterators, which is
+useful in cases where the Iterator does not implement the OptionDescriber
+interface.
+
+    config -t mytable -s table.iterator.scan.myiter=15,com.company.MyIterator
+    config -t mytable -s table.iterator.minc.myiter=15,com.company.MyIterator
+    config -t mytable -s table.iterator.majc.myiter=15,com.company.MyIterator
+    config -t mytable -s table.iterator.scan.myiter.opt.myoptionname=myoptionvalue
+    config -t mytable -s table.iterator.minc.myiter.opt.myoptionname=myoptionvalue
+    config -t mytable -s table.iterator.majc.myiter.opt.myoptionname=myoptionvalue
+
+Typically, a table will have multiple iterators. Accumulo configures a set of
+system level iterators for each table. These iterators provide core
+functionality like visibility label filtering and may not be removed by
+users. User-level iterators are applied in the order of their priority.
+Priority is a user-configured integer; iterators with lower numbers go first,
+passing the results of their iteration on to the other iterators up the
+stack.
+
+### Setting Iterators Programmatically
+
+```java
+scanner.addIterator(new IteratorSetting(
+    15, // priority
+    "myiter", // name this iterator
+    "com.company.MyIterator" // class name
+));
+```
+
+Some iterators take additional parameters from client code, as in the following
+example:
+
+```java
+IteratorSetting iter = new IteratorSetting(...);
+iter.addOption("myoptionname", "myoptionvalue");
+scanner.addIterator(iter);
+```
+
+Tables support separate Iterator settings to be applied at scan time, upon minor
+compaction, and upon major compaction. For most uses, tables will have identical
+iterator settings for all three to avoid inconsistent results.
+
+### Versioning Iterators and Timestamps
+
+Accumulo provides the capability to manage versioned data through the use of
+timestamps within the Key. If a timestamp is not specified in the key created by
+the client, then the system will set the timestamp to the current time. Two keys
+with identical rowIDs and columns but different timestamps are considered two
+versions of the same key. If two inserts are made into Accumulo with the same
+rowID, column, and timestamp, then the behavior is non-deterministic.
+
+Timestamps are sorted in descending order, so the most recent data comes first.
+Accumulo can be configured to return the top k versions, or versions later than a
+given date. The default is to return the one most recent version.
+
+The version policy can be changed by altering the VersioningIterator options for a
+table as follows:
+
+```
+user@myinstance mytable> config -t mytable -s table.iterator.scan.vers.opt.maxVersions=3
+
+user@myinstance mytable> config -t mytable -s table.iterator.minc.vers.opt.maxVersions=3
+
+user@myinstance mytable> config -t mytable -s table.iterator.majc.vers.opt.maxVersions=3
+```
+
+When a table is created, by default it is configured to use the
+VersioningIterator and keep one version. A table can be created without the
+VersioningIterator with the -ndi option in the shell. The Java API also
+has the following method:
+
+```java
+connector.tableOperations().create(String tableName, boolean limitVersion);
+```
+
+#### Logical Time
+
+Accumulo 1.2 introduced the concept of logical time. This ensures that timestamps
+set by Accumulo always move forward. This helps avoid problems caused by
+TabletServers that have different time settings. The per-tablet counter gives
+unique, one-up timestamps on a per-mutation basis. When using time in
+milliseconds, if two things arrive within the same millisecond then both receive
+the same timestamp. When using time in milliseconds, Accumulo-set times will
+still always move forward and never backward.
+
+A table can be configured to use logical timestamps at creation time as follows:
+
+    user@myinstance> createtable -tl logical
+
+#### Deletes
+
+Deletes are special keys in Accumulo that get sorted along with all the other
+data. When a delete key is inserted, Accumulo will not show anything that has a
+timestamp less than or equal to the delete key. During major compaction, any keys
+older than a delete key are omitted from the new file created, and the omitted
+keys are removed from disk as part of the regular garbage collection process.
+
+### Filters
+
+When scanning over a set of key-value pairs, it is possible to apply an arbitrary
+filtering policy through the use of a Filter. Filters are types of iterators that
+return only key-value pairs that satisfy the filter logic. Accumulo has a few
+built-in filters that can be configured on any table: AgeOff, ColumnAgeOff,
+Timestamp, NoVis, and RegEx. More can be added by writing a Java class that
+extends the `org.apache.accumulo.core.iterators.Filter` class.
+
+The AgeOff filter can be configured to remove data older than a certain date or a
+fixed amount of time from the present. The following example sets a table to
+delete everything inserted over 30 seconds ago:
+
+```
+user@myinstance> createtable filtertest
+
+user@myinstance filtertest> setiter -t filtertest -scan -minc -majc -p 10 -n myfilter -ageoff
+AgeOffFilter removes entries with timestamps more than <ttl> milliseconds old
+----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter negate, default false
+                keeps k/v that pass accept method, true rejects k/v that pass accept method:
+----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter ttl, time to
+                live (milliseconds): 30000
+----------> set org.apache.accumulo.core.iterators.user.AgeOffFilter parameter currentTime, if set,
+                use the given value as the absolute time in milliseconds as the current time of day:
+
+user@myinstance filtertest>
+
+user@myinstance filtertest> scan
+
+user@myinstance filtertest> insert foo a b c
+
+user@myinstance filtertest> scan
+foo a:b [] c
+
+user@myinstance filtertest> sleep 4
+
+user@myinstance filtertest> scan
+
+user@myinstance filtertest>
+```
+
+To see the iterator settings for a table, use:
+
+    user@example filtertest> config -t filtertest -f iterator
+    ---------+---------------------------------------------+------------------
+    SCOPE    | NAME                                        | VALUE
+    ---------+---------------------------------------------+------------------
+    table    | table.iterator.majc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.majc.myfilter.opt.ttl ...... | 30000
+    table    | table.iterator.majc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.majc.vers.opt.maxVersions .. | 1
+    table    | table.iterator.minc.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.minc.myfilter.opt.ttl ...... | 30000
+    table    | table.iterator.minc.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.minc.vers.opt.maxVersions .. | 1
+    table    | table.iterator.scan.myfilter .............. | 10,org.apache.accumulo.core.iterators.user.AgeOffFilter
+    table    | table.iterator.scan.myfilter.opt.ttl ...... | 30000
+    table    | table.iterator.scan.vers .................. | 20,org.apache.accumulo.core.iterators.VersioningIterator
+    table    | table.iterator.scan.vers.opt.maxVersions .. | 1
+    ---------+---------------------------------------------+------------------
+
+### Combiners
+
+Accumulo supports on-the-fly lazy aggregation of data using Combiners.
+Aggregation is done at compaction and scan time. No lookup is done at insert
+time, which greatly speeds up ingest.
+
+Accumulo allows Combiners to be configured on tables and column
+families. When a Combiner is set, it is applied across the values
+associated with any keys that share rowID, column family, and column qualifier.
+This is similar to the reduce step in MapReduce, which applies some function to
+all the values associated with a particular key.
+
+For example, if a summing combiner were configured on a table and the following
+mutations were inserted:
+
+    Row     Family Qualifier Timestamp  Value
+    rowID1  colfA  colqA     20100101   1
+    rowID1  colfA  colqA     20100102   1
+
+The table would reflect only one aggregate value:
+
+    rowID1  colfA  colqA     -          2
+
+Combiners can be enabled for a table using the setiter command in the shell.
+Below is an example.
+
+```
+root@a14 perDayCounts> setiter -t perDayCounts -p 10 -scan -minc -majc -n daycount
+                       -class org.apache.accumulo.core.iterators.user.SummingCombiner
+TypedValueCombiner can interpret Values as a variety of number encodings
+  (VLong, Long, or String) before combining
+----------> set SummingCombiner parameter columns,
+            <col fam>[:<col qual>]{,<col fam>[:<col qual>]} : day
+----------> set SummingCombiner parameter type, <VARNUM|LONG|STRING>: STRING
+
+root@a14 perDayCounts> insert foo day 20080101 1
+root@a14 perDayCounts> insert foo day 20080101 1
+root@a14 perDayCounts> insert foo day 20080103 1
+root@a14 perDayCounts> insert bar day 20080101 1
+root@a14 perDayCounts> insert bar day 20080101 1
+
+root@a14 perDayCounts> scan
+bar day:20080101 []    2
+foo day:20080101 []    2
+foo day:20080103 []    1
+```
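+
+The same combiner can also be attached programmatically; a sketch follows, where
+the table and iterator names are illustrative and conn is an existing Connector.
+
+```java
+IteratorSetting setting = new IteratorSetting(10, "daycount", SummingCombiner.class);
+// Combine only the "day" column family, treating values as string-encoded longs.
+SummingCombiner.setColumns(setting,
+    Collections.singletonList(new IteratorSetting.Column("day")));
+SummingCombiner.setEncodingType(setting, LongCombiner.Type.STRING);
+conn.tableOperations().attachIterator("perDayCounts", setting);
+```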
+
+Accumulo includes some useful Combiners out of the box. To find these look in
+the *`org.apache.accumulo.core.iterators.user`* package.
+
+Additional Combiners can be added by creating a Java class that extends
+`org.apache.accumulo.core.iterators.Combiner` and adding a jar containing that
+class to Accumulo's lib/ext directory.
+
+See the [combiner example](https://github.com/apache/accumulo-examples/blob/master/docs/combiner.md)
+for example code.
+
+## Block Cache
+
+In order to increase the throughput of commonly accessed entries, Accumulo
+employs a block cache. This block cache buffers data in memory so that it doesn't
+have to be read off of disk. The RFile format that Accumulo prefers is a mix of
+index blocks and data blocks, where the index blocks are used to find the
+appropriate data blocks. Typical queries to Accumulo result in a binary search
+over several index blocks followed by a linear scan of one or more data blocks.
+
+The block cache can be configured on a per-table basis, and all tablets hosted on
+a tablet server share a single resource pool. To configure the size of the tablet
+server's block cache, set the following properties:
+
+    tserver.cache.data.size: Specifies the size of the cache for file data blocks.
+    tserver.cache.index.size: Specifies the size of the cache for file indices.
+
+To enable the block cache for your table, set the following properties:
+
+    table.cache.block.enable: Determines whether file (data) block cache is enabled.
+    table.cache.index.enable: Determines whether index cache is enabled.
+
+The block cache can have a significant effect on alleviating hot spots, as well
+as reducing query latency. It is enabled by default for the metadata tables.
+
+## Compaction
+
+As data is written to Accumulo it is buffered in memory. The data buffered in
+memory is eventually written to HDFS on a per tablet basis. Files can also be
+added to tablets directly by bulk import. In the background tablet servers run
+major compactions to merge multiple files into one. The tablet server has to
+decide which tablets to compact and which files within a tablet to compact.
+This decision is made using the compaction ratio, which is configurable on a
+per-table basis. To configure this ratio, modify the following property:
+
+    table.compaction.major.ratio
+
+Increasing this ratio will result in more files per tablet and less compaction
+work. More files per tablet means higher query latency. So adjusting
+this ratio is a trade-off between ingest and query performance. The ratio
+defaults to 3.
+
+The way the ratio works is that a set of files is compacted into one file if the
+sum of the sizes of the files in the set is larger than the ratio multiplied by
+the size of the largest file in the set. If this is not true for the set of all
+files in a tablet, the largest file is removed from consideration, and the
+remaining files are considered for compaction. This is repeated until a
+compaction is triggered or there are no files left to consider.
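+
+The following sketch is not Accumulo's actual implementation, just an
+illustration of the rule above (standard java.util collections assumed imported):
+
+```java
+// Given file sizes sorted largest-first, return the first set whose total size
+// exceeds ratio * (largest file in the set), or an empty list if none does.
+static List<Long> filesToCompact(List<Long> sizesDescending, double ratio) {
+  List<Long> candidates = new ArrayList<>(sizesDescending);
+  while (!candidates.isEmpty()) {
+    long largest = candidates.get(0);
+    long total = 0;
+    for (long size : candidates)
+      total += size;
+    if (total > ratio * largest)
+      return candidates; // compact these files into one
+    candidates.remove(0); // drop the largest and reconsider the rest
+  }
+  return Collections.emptyList(); // no compaction triggered
+}
+```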
+
+The number of background threads tablet servers use to run major compactions is
+configurable. To configure this, modify the following property:
+
+    tserver.compaction.major.concurrent.max
+
+Also, the number of threads tablet servers use for minor compactions is
+configurable. To configure this, modify the following property:
+
+    tserver.compaction.minor.concurrent.max
+
+The numbers of minor and major compactions running and queued are visible on the
+Accumulo monitor page. This allows you to see if compactions are backing up
+and adjustments to the above settings are needed. When adjusting the number of
+threads available for compactions, consider the number of cores and other tasks
+running on the nodes such as maps and reduces.
+
+If major compactions are not keeping up, then the number of files per tablet
+will grow to a point such that query performance starts to suffer. One way to
+handle this situation is to increase the compaction ratio. For example, if the
+compaction ratio were set to 1, then every new file added to a tablet by minor
+compaction would immediately queue the tablet for major compaction. So if a
+tablet has a 200M file and minor compaction writes a 1M file, then the major
+compaction will attempt to merge the 200M and 1M file. If the tablet server
+has lots of tablets trying to do this sort of thing, then major compactions
+will back up and the number of files per tablet will start to grow, assuming
+data is being continuously written. Increasing the compaction ratio will
+alleviate backups by lowering the amount of major compaction work that needs to
+be done.
+
+Another option to deal with the files per tablet growing too large is to adjust
+the following property:
+
+    table.file.max
+
+When a tablet reaches this number of files and needs to flush its in-memory
+data to disk, it will choose to do a merging minor compaction. A merging minor
+compaction will merge the tablet's smallest file with the data in memory at
+minor compaction time. Therefore the number of files will not grow beyond this
+limit. This will make minor compactions take longer, which will cause ingest
+performance to decrease. This can cause ingest to slow down until major
+compactions have enough time to catch up. When adjusting this property, also
+consider adjusting the compaction ratio. Ideally, merging minor compactions
+never need to occur and major compactions will keep up. It is possible to
+configure the file max and compaction ratio such that only merging minor
+compactions occur and major compactions never occur. This should be avoided
+because doing only merging minor compactions causes O(N^2) work to be done.
+The amount of work done by major compactions is O(N log_R(N)), where
+R is the compaction ratio.
+
+Compactions can be initiated manually for a table. To initiate a minor
+compaction, use the flush command in the shell. To initiate a major compaction,
+use the compact command in the shell. The compact command will compact all
+tablets in a table to one file. Even tablets with one file are compacted. This
+is useful for the case where a major compaction filter is configured for a
+table. In 1.4 the ability to compact a range of a table was added. To use this
+feature specify start and stop rows for the compact command. This will only
+compact tablets that overlap the given row range.
+
+### Compaction Strategies
+
+The default behavior of major compactions is defined in the class
+DefaultCompactionStrategy. This behavior can be changed by overriding the
+following property with a fully qualified class name:
+
+    table.majc.compaction.strategy
+
+Custom compaction strategies can have additional properties that are specified
+following the prefix property:
+
+    table.majc.compaction.strategy.opts.*
+
+Accumulo provides a few classes that can be used as alternative compaction
+strategies. These classes are located in the org.apache.accumulo.tserver.compaction
+package. EverythingCompactionStrategy simply compacts all files; this is the
+strategy used by the user "compact" command. SizeLimitCompactionStrategy compacts
+files no bigger than the limit set in the property
+table.majc.compaction.strategy.opts.sizeLimit.
+
+TwoTierCompactionStrategy is a hybrid compaction strategy that supports two types
+of compression. If the total size of the files being compacted is larger than
+table.majc.compaction.strategy.opts.file.large.compress.threshold, then a larger
+compression type will be used. The larger compression type is specified in
+table.majc.compaction.strategy.opts.file.large.compress.type. Otherwise, the
+configured table compression will be used. To use this strategy with minor
+compactions, set table.file.compress.type=snappy and set a different compress
+type in table.majc.compaction.strategy.opts.file.large.compress.type for larger
+files.
+
+## Pre-splitting tables
+
+Accumulo will balance and distribute tables across servers. Before a
+table gets large, it will be maintained as a single tablet on a single
+server. This limits the speed at which data can be added or queried
+to the speed of a single node. To improve performance when a table
+is new or small, you can add split points and generate new tablets.
+
+In the shell:
+
+    root@myinstance> createtable newTable
+    root@myinstance> addsplits -t newTable g n t
+
+This will create a new table with 4 tablets. The table will be split
+on the letters "g", "n", and "t", which will work nicely if the
+row data start with lower-case alphabetic characters. If your row
+data includes binary information or numeric information, or if the
+distribution of the row information is not flat, then you would pick
+different split points. Now ingest and query can proceed on 4 nodes,
+which can improve performance.
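+
+The same pre-split can be done through the client API; a sketch, assuming an
+existing Connector named conn:
+
+```java
+// Split points are org.apache.hadoop.io.Text values, as in the shell example.
+SortedSet<Text> splits = new TreeSet<>();
+splits.add(new Text("g"));
+splits.add(new Text("n"));
+splits.add(new Text("t"));
+
+conn.tableOperations().create("newTable");
+conn.tableOperations().addSplits("newTable", splits);
+```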
+
+## Merging tablets
+
+Over time, a table can get very large, so large that it has hundreds
+of thousands of split points. Once there are enough tablets to spread
+a table across the entire cluster, additional splits may not improve
+performance, and may create unnecessary bookkeeping. The distribution
+of data may change over time. For example, if row data contains date
+information, and data is continually added and removed to maintain a
+window of current information, tablets for older rows may be empty.
+
+Accumulo supports tablet merging, which can be used to reduce
+the number of split points. The following command will merge all rows
+from "A" to "Z" into a single tablet:
+
+    root@myinstance> merge -t myTable -s A -e Z
+
+If the result of a merge produces a tablet that is larger than the
+configured split size, the tablet may be split by the tablet server.
+Be sure to increase your tablet size prior to any merges if the goal
+is to have larger tablets:
+
+    root@myinstance> config -t myTable -s table.split.threshold=2G
+
+In order to merge small tablets, you can ask Accumulo to merge
+sections of a table smaller than a given size.
+
+    root@myinstance> merge -t myTable -s 100M
+
+By default, small tablets will not be merged into tablets that are
+already larger than the given size. This can leave isolated small
+tablets. To force small tablets to be merged into larger tablets use
+the `--force` option:
+
+    root@myinstance> merge -t myTable -s 100M --force
+
+Merging away small tablets works on one section at a time. If your
+table contains many sections of small split points, or you are
+attempting to change the split size of the entire table, it will be
+faster to set the split point and merge the entire table:
+
+    root@myinstance> config -t myTable -s table.split.threshold=256M
+    root@myinstance> merge -t myTable
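+
+Merges can also be requested through the client API; a sketch of the A-to-Z merge
+shown earlier, assuming an existing Connector named conn:
+
+```java
+// Merge all tablets overlapping rows A through Z into one tablet.
+conn.tableOperations().merge("myTable", new Text("A"), new Text("Z"));
+```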
+
+## Delete Range
+
+Consider an indexing scheme that uses date information in each row.
+For example, "20110823-15:20:25.013" might be a row that specifies a
+date and time. In some cases, we might like to delete rows based on
+this date, say to remove all the data older than the current year.
+Accumulo supports a delete range operation which efficiently
+removes data between two rows. For example:
+
+    root@myinstance> deleterange -t myTable -s 2010 -e 2011
+
+This will delete all rows starting with "2010", and it will stop at
+any row starting with "2011". You can delete any data prior to 2011
+with:
+
+    root@myinstance> deleterange -t myTable -e 2011 --force
+
+The shell will not allow you to delete an unbounded range (no start)
+unless you provide the `--force` option.
+
+Range deletion is implemented using splits at the given start/end
+positions, and will affect the number of splits in the table.
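+
+The same operation is available programmatically as deleteRows; a sketch,
+assuming an existing Connector named conn (a null start row means delete from
+the beginning of the table):
+
+```java
+// Removes all rows in the range (null, "2011"], i.e. everything up to "2011".
+conn.tableOperations().deleteRows("myTable", null, new Text("2011"));
+```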
+
+## Cloning Tables
+
+A new table can be created that points to an existing table's data. This is a
+very quick metadata operation; no data is actually copied. The cloned table
+and the source table can change independently after the clone operation. One
+use case for this feature is testing. For example to test a new filtering
+iterator, clone the table, add the filter to the clone, and force a major
+compaction. To perform a test on less data, clone a table and then use delete
+range to efficiently remove a lot of data from the clone. Another use case is
+generating a snapshot to guard against human error. To create a snapshot,
+clone a table and then disable write permissions on the clone.
+
+The clone operation will point to the source table's files. This is why the
+flush option is present and is enabled by default in the shell. If the flush
+option is not enabled, then any data the source table currently has in memory
+will not exist in the clone.
+
+A cloned table copies the configuration of the source table. However, the
+permissions of the source table are not copied to the clone. After a clone is
+created, only the user that created the clone can read and write to it.
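+
+Cloning is also exposed through the client API; a sketch, assuming an existing
+Connector named conn:
+
+```java
+// flush=true captures the source table's in-memory data before cloning;
+// the empty map/set mean no properties are overridden or excluded.
+conn.tableOperations().clone("people", "test", true,
+    Collections.<String,String>emptyMap(), Collections.<String>emptySet());
+```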
+
+In the following example we see that data inserted after the clone operation is
+not visible in the clone.
+
+```
+root@a14> createtable people
+
+root@a14 people> insert 890435 name last Doe
+root@a14 people> insert 890435 name first John
+
+root@a14 people> clonetable people test
+
+root@a14 people> insert 890436 name first Jane
+root@a14 people> insert 890436 name last Doe
+
+root@a14 people> scan
+890435 name:first []    John
+890435 name:last []    Doe
+890436 name:first []    Jane
+890436 name:last []    Doe
+
+root@a14 people> table test
+
+root@a14 test> scan
+890435 name:first []    John
+890435 name:last []    Doe
+
+root@a14 test>
+```
+
+The du command in the shell shows how much space a table is using in HDFS.
+This command can also show how much overlapping space two cloned tables have in
+HDFS. In the example below, du shows table ci is using 428M. Then ci is cloned
+to cic, and du shows that both tables share 428M. After three entries are
+inserted into cic and it is flushed, du shows the two tables still share 428M,
+but cic has 226 bytes to itself. Finally, table cic is compacted, and then du
+shows that each table uses 428M.
+
+```
+root@a14> du ci
+             428,482,573 [ci]
+
+root@a14> clonetable ci cic
+
+root@a14> du ci cic
+             428,482,573 [ci, cic]
+
+root@a14> table cic
+
+root@a14 cic> insert r1 cf1 cq1 v1
+root@a14 cic> insert r1 cf1 cq2 v2
+root@a14 cic> insert r1 cf1 cq3 v3
+
+root@a14 cic> flush -t cic -w
+27 15:00:13,908 [shell.Shell] INFO : Flush of table cic completed.
+
+root@a14 cic> du ci cic
+             428,482,573 [ci, cic]
+                     226 [cic]
+
+root@a14 cic> compact -t cic -w
+27 15:00:35,871 [shell.Shell] INFO : Compacting table ...
+27 15:03:03,303 [shell.Shell] INFO : Compaction of table cic completed for given range
+
+root@a14 cic> du ci cic
+             428,482,573 [ci]
+             428,482,612 [cic]
+
+root@a14 cic>
+```
+
+## Exporting Tables
+
+Accumulo supports exporting tables for the purpose of copying tables to another
+cluster. Exporting and importing tables preserves the table's configuration,
+splits, and logical time. Tables are exported and then copied via the hadoop
+distcp command. To export a table, it must be offline and stay offline while
+distcp runs. The reason it needs to stay offline is to prevent files from being
+deleted. A table can be cloned and the clone taken offline in order to avoid
+losing access to the table. See the [export example](https://github.com/apache/accumulo-examples/blob/master/docs/export.md)
+for example code.
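+
+A sketch of the clone-then-export approach through the client API (the table and
+export directory names are illustrative):
+
+```java
+conn.tableOperations().clone("myTable", "myTable_export", true,
+    Collections.<String,String>emptyMap(), Collections.<String>emptySet());
+conn.tableOperations().offline("myTable_export", true); // wait until offline
+conn.tableOperations().exportTable("myTable_export", "/exports/myTable");
+```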
+
+[config]: /docs/{{ page.version }}/config/

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/7cc70b2e/_docs-unreleased/getting-started/table_design.md
----------------------------------------------------------------------
diff --git a/_docs-unreleased/getting-started/table_design.md 
b/_docs-unreleased/getting-started/table_design.md
new file mode 100644
index 0000000..fb547db
--- /dev/null
+++ b/_docs-unreleased/getting-started/table_design.md
@@ -0,0 +1,303 @@
+---
+title: Table Design
+category: getting-started
+order: 4
+---
+
+### Basic Table
+
+Since Accumulo tables are sorted by row ID, each table can be thought of as being
+indexed by the row ID. Lookups performed by row ID can be executed quickly by
+doing a binary search, first across the tablets and then within a tablet. Clients
+should choose a row ID carefully in order to support their desired application. A
+simple rule is to select a unique identifier as the row ID for each entity to be
+stored and assign all the other attributes to be tracked to be columns under this
+row ID. For example, if we have the following data in a comma-separated file:
+
+    userid,age,address,account-balance
+
+We might choose to store this data using the userid as the rowID, the column
+name in the column family, and a blank column qualifier:
+
+```java
+Mutation m = new Mutation(userid);
+final String column_qualifier = "";
+m.put("age", column_qualifier, age);
+m.put("address", column_qualifier, address);
+m.put("balance", column_qualifier, account_balance);
+
+writer.add(m);
+```
+
+We could then retrieve any of the columns for a specific userid by specifying the
+userid as the range of a scanner and fetching specific columns:
+
+```java
+Range r = new Range(userid, userid); // single row
+Scanner s = conn.createScanner("userdata", auths);
+s.setRange(r);
+s.fetchColumnFamily(new Text("age"));
+
+for(Entry<Key,Value> entry : s) {
+  System.out.println(entry.getValue().toString());
+}
+```
+
+### RowID Design
+
+Often it is necessary to transform the rowID in order to have rows ordered in a
+way that is optimal for anticipated access patterns. A good example of this is
+reversing the order of components of internet domain names in order to group rows
+of the same parent domain together:
+
+    com.google.code
+    com.google.labs
+    com.google.mail
+    com.yahoo.mail
+    com.yahoo.research
+
+Some data may result in the creation of very large rows - rows with many columns.
+In this case the table designer may wish to split up these rows for better load
+balancing while keeping them sorted together for scanning purposes. This can be
+done by appending a random substring at the end of the row:
+
+    com.google.code_00
+    com.google.code_01
+    com.google.code_02
+    com.google.labs_00
+    com.google.mail_00
+    com.google.mail_01
+
+It could also be done by adding a string representation of some period of time,
+such as the date to the week or month:
+
+    com.google.code_201003
+    com.google.code_201004
+    com.google.code_201005
+    com.google.labs_201003
+    com.google.mail_201003
+    com.google.mail_201004
+
+Appending dates provides the additional capability of restricting a scan to a
+given date range.
+
+### Lexicoders
+
+Since Keys in Accumulo are sorted lexicographically by default, it's often useful
+to encode common data types into a byte format in which their sort order
+corresponds to the sort order in their native form. An example of this is
+encoding dates and numerical data so that they can be efficiently seeked and
+searched in ranges.
+
+The lexicoders are a standard and extensible way of encoding Java types. Here's
+an example of a lexicoder that encodes a Java Date object so that it sorts
+lexicographically:
+
+```java
+// create new date lexicoder
+DateLexicoder dateEncoder = new DateLexicoder();
+
+// truncate time to hours
+long epoch = System.currentTimeMillis();
+Date hour = new Date(epoch - (epoch % 3600000));
+
+// encode the rowId so that it is sorted lexicographically
+Mutation mutation = new Mutation(dateEncoder.encode(hour));
+mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+```
+
+If we want to return the most recent date first, we can reverse the sort order
+with the reverse lexicoder:
+
+```java
+// create new date lexicoder and reverse lexicoder
+DateLexicoder dateEncoder = new DateLexicoder();
+ReverseLexicoder reverseEncoder = new ReverseLexicoder(dateEncoder);
+
+// truncate date to hours
+long epoch = System.currentTimeMillis();
+Date hour = new Date(epoch - (epoch % 3600000));
+
+// encode the rowId so that it sorts in reverse lexicographic order
+Mutation mutation = new Mutation(reverseEncoder.encode(hour));
+mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+```
+
+### Indexing
+
+In order to support lookups via more than one attribute of an entity, additional
+indexes can be built. However, because Accumulo tables can support any number of
+columns without specifying them beforehand, a single additional index will often
+suffice for supporting lookups of records in the main table. Here, the index has,
+as the rowID, the Value or Term from the main table, the column families are the
+same, and the column qualifier of the index table contains the rowID from the
+main table.
+
+| RowID | Column Family | Column Qualifier | Value |
+|-------|---------------|------------------|-------|
+| Term  | Field Name    | MainRowID        |       |
+
+Note: We store rowIDs in the column qualifier rather than the Value so that we
+can have more than one rowID associated with a particular term within the index.
+If we stored this in the Value, we would only see one of the rows in which the
+value appears, since Accumulo is configured by default to return the one most
+recent value associated with a key.
+
+Lookups can then be done by scanning the Index Table first for occurrences of the
+desired values in the columns specified, which returns a list of row IDs from the
+main table. These can then be used to retrieve each matching record, in their
+entirety or a subset of their columns, from the Main Table.
+
+To support efficient lookups of multiple rowIDs from the same table, the Accumulo
+client library provides a BatchScanner. Users specify a set of Ranges to the
+BatchScanner, which performs the lookups in multiple threads to multiple servers
+and returns an Iterator over all the rows retrieved. The rows returned are NOT in
+sorted order, as is the case with the basic Scanner interface.
+
+```java
+// first we scan the index for IDs of rows matching our query
+Text term = new Text("mySearchTerm");
+
+HashSet<Range> matchingRows = new HashSet<Range>();
+
+Scanner indexScanner = conn.createScanner("index", auths);
+indexScanner.setRange(new Range(term, term));
+
+// we retrieve the matching rowIDs and create a set of ranges
+for(Entry<Key,Value> entry : indexScanner) {
+    matchingRows.add(new Range(entry.getKey().getColumnQualifier()));
+}
+
+// now we pass the set of rowIDs to the batch scanner to retrieve them
+BatchScanner bscan = conn.createBatchScanner("table", auths, 10);
+bscan.setRanges(matchingRows);
+bscan.fetchColumnFamily(new Text("attributes"));
+
+for(Entry<Key,Value> entry : bscan) {
+    System.out.println(entry.getValue());
+}
+```
+
+One advantage of the dynamic schema capabilities of Accumulo is that different
+fields may be indexed into the same physical table. However, it may be necessary
+to create different index tables if the terms must be formatted differently in
+order to maintain proper sort order. For example, real numbers must be formatted
+differently than their usual notation in order to be sorted correctly. In these
+cases, usually one index per unique data type will suffice.
+
+### Entity-Attribute and Graph Tables
+
+Accumulo is ideal for storing entities and their attributes, especially if the
+attributes are sparse. It is often useful to join several datasets together on
+common entities within the same table. This can allow for the representation of
+graphs, including nodes, their attributes, and connections to other nodes.
+
+Rather than storing individual events, Entity-Attribute or Graph tables store
+aggregate information about the entities involved in the events and the
+relationships between entities. This is often preferable when single events
+aren't very useful and when a continuously updated summarization is desired.
+
+The physical schema for an entity-attribute or graph table is as follows:
+
+|RowID    |Column Family  |Column Qualifier |Value |
+|---------|---------------|-----------------|------|
+|EntityID |Attribute Name |Attribute Value  |Weight|
+|EntityID |Edge Type      |Related EntityID |Weight|
+
+For example, to keep track of employees, managers, and products, the following
+entity-attribute table could be used. Note that the weights are not always
+necessary and are set to 0 when not used.
+
+|RowID |Column Family |Column Qualifier |Value
+|------|--------------|-----------------|-----
+| E001 | name         | bob             | 0
+| E001 | department   | sales           | 0
+| E001 | hire_date    | 20030102        | 0
+| E001 | units_sold   | P001            | 780
+| E002 | name         | george          | 0
+| E002 | department   | sales           | 0
+| E002 | manager_of   | E001            | 0
+| E002 | manager_of   | E003            | 0
+| E003 | name         | harry           | 0
+| E003 | department   | accounts_recv   | 0
+| E003 | hire_date    | 20000405        | 0
+| E003 | units_sold   | P002            | 566
+| E003 | units_sold   | P001            | 232
+| P001 | product_name | nike_airs       | 0
+| P001 | product_type | shoe            | 0
+| P001 | in_stock     | germany         | 900
+| P001 | in_stock     | brazil          | 200
+| P002 | product_name | basic_jacket    | 0
+| P002 | product_type | clothing        | 0
+| P002 | in_stock     | usa             | 3454
+| P002 | in_stock     | germany         | 700
+
+To allow efficient updating of edge weights, an aggregating iterator can be
+configured to add the value of all mutations applied with the same key. These
+types of tables can easily be created from raw events by simply extracting the
+entities, attributes, and relationships from individual events and inserting the
+keys into Accumulo, each with a count of 1. The aggregating iterator will take
+care of maintaining the edge weights.
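+
+A sketch of that configuration using the built-in SummingCombiner (the table and
+iterator names are illustrative, and conn is an existing Connector):
+
+```java
+IteratorSetting edgeSum = new IteratorSetting(10, "edgeSum", SummingCombiner.class);
+// Sum values for every column, treating each value as a string-encoded long.
+SummingCombiner.setCombineAllColumns(edgeSum, true);
+SummingCombiner.setEncodingType(edgeSum, LongCombiner.Type.STRING);
+conn.tableOperations().attachIterator("graph", edgeSum);
+```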
+
+### Document-Partitioned Indexing
+
+Using a simple index as described above works well when looking for records that
+match one of a set of given criteria. When looking for records that match more
+than one criterion simultaneously, such as when looking for documents that
+contain all of the words 'the' and 'white' and 'house', there are several issues.
+
+First is that the set of all records matching any one of the search terms must be
+sent to the client, which incurs a lot of network traffic. The second problem is
+that the client is responsible for performing set intersection on the sets of
+records returned to eliminate all but the records matching all search terms. The
+memory of the client may easily be overwhelmed during this operation.
+
+For these reasons Accumulo includes support for a scheme known as sharded
+indexing, in which these set operations can be performed at the TabletServers and
+decisions about which records to include in the result set can be made without
+incurring network traffic.
+
+This is accomplished via partitioning records into bins that each reside on at
+most one TabletServer, and then creating an index of terms per record within each
+bin as follows:
+
+|RowID |Column Family |Column Qualifier |Value
+|------|--------------|-----------------|------
+|BinID |Term          |DocID            |Weight
+
+Documents or records are mapped into bins by a user-defined ingest application.
+By storing the BinID as the RowID, we ensure that all the information for a
+particular bin is contained in a single tablet and hosted on a single
+TabletServer, since Accumulo never splits rows across tablets. Storing the Terms
+as column families serves to enable fast lookups of all the documents within this
+bin that contain the given term.
+
+Finally, we perform set intersection operations on the TabletServer via a special
+iterator called the Intersecting Iterator. Since documents are partitioned into
+many bins, a search of all documents must search every bin. We can use the
+BatchScanner to scan all bins in parallel. The Intersecting Iterator should be
+enabled on a BatchScanner within user query code as follows:
+
+```java
+Text[] terms = {new Text("the"), new Text("white"), new Text("house")};
+
+BatchScanner bscan = conn.createBatchScanner(table, auths, 20);
+
+IteratorSetting iter = new IteratorSetting(20, "ii", IntersectingIterator.class);
+IntersectingIterator.setColumnFamilies(iter, terms);
+
+bscan.addScanIterator(iter);
+bscan.setRanges(Collections.singleton(new Range()));
+
+for(Entry<Key,Value> entry : bscan) {
+    System.out.println(" " + entry.getKey().getColumnQualifier());
+}
+```
+
+This code effectively has the BatchScanner scan all tablets of a table, looking
+for documents that match all the given terms. Because all tablets are being
+scanned for every query, each query is more expensive than other Accumulo scans,
+which typically involve a small number of TabletServers. This reduces the number
+of concurrent queries supported, and it is subject to what is known as the
+'straggler' problem, in which every query runs as slow as the slowest server
+participating.
+
+Of course, fast servers will return their results to the client, which can
+display them to the user immediately while they wait for the rest of the results
+to arrive. If the results are unordered, this is quite effective, as the first
+results to arrive are as good as any others to the user.

http://git-wip-us.apache.org/repos/asf/accumulo-website/blob/7cc70b2e/_docs-unreleased/index.md
----------------------------------------------------------------------
diff --git a/_docs-unreleased/index.md b/_docs-unreleased/index.md
new file mode 100644
index 0000000..02e840c
--- /dev/null
+++ b/_docs-unreleased/index.md
@@ -0,0 +1,16 @@
+---
+title: Apache Accumulo documentation
+nodoctitle: true
+---
+
+# Accumulo {{ page.version }} documentation
+
+Welcome to the documentation for the {{ page.version }} release of Apache Accumulo.
+
+Use the sidebar to the left to navigate the documentation.
+
+The links below contain additional documentation for this release:
+
+* [Release notes](/release/accumulo-1.8.1)
+* [Javadocs](/1.8/apidocs)
+* [Examples](/1.8/examples)
