Author: buildbot
Date: Mon May  5 15:50:01 2014
New Revision: 908107

Log:
Staging update by buildbot for accumulo

Modified:
    websites/staging/accumulo/trunk/content/   (props changed)
    websites/staging/accumulo/trunk/content/release_notes/1.6.0.html

Propchange: websites/staging/accumulo/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Mon May  5 15:50:01 2014
@@ -1 +1 @@
-1592557
+1592559

Modified: websites/staging/accumulo/trunk/content/release_notes/1.6.0.html
==============================================================================
--- websites/staging/accumulo/trunk/content/release_notes/1.6.0.html (original)
+++ websites/staging/accumulo/trunk/content/release_notes/1.6.0.html Mon May  5 
15:50:01 2014
@@ -197,12 +197,12 @@ Latest 1.4 release: <strong>1.4.5</stron
 <p>Accumulo 1.6.0 runs on Hadoop 1; however, Hadoop 2 with an HA namenode is 
recommended for production systems.  In addition to HA, Hadoop 2 also offers 
better data durability guarantees than Hadoop 1 when nodes lose power.</p>
 <h2 id="notable-improvements">Notable Improvements</h2>
 <h3 id="multiple-volume-support">Multiple volume support</h3>
-<p>[BigTable's][1] design allows for its internal metadata to automatically 
spread across multiple nodes.  Accumulo has followed this design and scales 
very well as a result.  There is one impediment to scaling though, and this is 
the HDFS namenode.  There are two problems with the namenode when it comes to 
scaling.  First, the namenode stores all of its filesystem metadata in memory 
on a single machine.  This introduces an upper bound on the number of files 
Accumulo can have.  Second, there is an upper bound on the number of file 
operations per second that a single namenode can support.  For example, a 
namenode can only support a few thousand delete or create file request per 
second.  </p>
-<p>To overcome this bottleneck, support for multiple namenodes was added under 
[ACCUMULO-118][ACCUMULO-118].  This change allows Accumulo to store its files 
across multiple namenodes.  To use this feature, place comma separated list of 
namenode URIs in the new <em>instance.volumes</em> configuration property in 
accumulo-site.xml.  When upgrading to 1.6.0 and multiple namenode support is 
desired, modify this setting <strong>only</strong> after a successful 
upgrade.</p>
+<p><a href="http://research.google.com/archive/bigtable.html";>BigTable's</a> 
design allows for its internal metadata to automatically spread across multiple 
nodes.  Accumulo has followed this design and scales very well as a result.  
There is one impediment to scaling, though: the HDFS namenode.  
There are two problems with the namenode when it comes to scaling.  First, the 
namenode stores all of its filesystem metadata in memory on a single machine.  
This introduces an upper bound on the number of files Accumulo can have.  
Second, there is an upper bound on the number of file operations per second 
that a single namenode can support.  For example, a namenode can only support a 
few thousand delete or create file requests per second.</p>
+<p>To overcome this bottleneck, support for multiple namenodes was added under 
<a href="https://issues.apache.org/jira/browse/ACCUMULO-118"; title="Multiple 
namenode support">ACCUMULO-118</a>.  This change allows Accumulo to store its 
files across multiple namenodes.  To use this feature, place a comma-separated 
list of namenode URIs in the new <em>instance.volumes</em> configuration 
property in accumulo-site.xml.  When upgrading to 1.6.0 and multiple namenode 
support is desired, modify this setting <strong>only</strong> after a 
successful upgrade.</p>
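As a sketch, an <em>instance.volumes</em> entry in accumulo-site.xml might look like the following (the namenode hostnames and ports here are hypothetical placeholders, not values from this release):

```xml
<!-- accumulo-site.xml: store Accumulo files across two namenodes.
     The hostnames below are examples only. -->
<property>
  <name>instance.volumes</name>
  <value>hdfs://namenode1:9000/accumulo,hdfs://namenode2:9000/accumulo</value>
</property>
```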
 <h3 id="table-namespaces">Table namespaces</h3>
-<p>Administering an Accumulo instance with many tables is cumbersome.  To ease 
this, [ACCUMULO-802][ACCUMULO-802] introduced table namespaces which allow 
tables to be grouped into logical collections.  This allows configuration and 
permission changes to made to a namespace, which will apply to all of its 
tables.</p>
+<p>Administering an Accumulo instance with many tables is cumbersome.  To ease 
this, <a href="https://issues.apache.org/jira/browse/ACCUMULO-802"; title="Table 
namespaces">ACCUMULO-802</a> introduced table namespaces which allow tables to 
be grouped into logical collections.  This allows configuration and permission 
changes to be made to a namespace, which then apply to all of its tables.</p>
 <h3 id="conditional-mutations">Conditional Mutations</h3>
-<p>Accumulo now offers a way to make atomic read,modify,write row changes from 
the client side.  Atomic test and set row operations make this possible.  
[ACCUMULO-1000][ACCUMULO-1000] added conditional mutations and a conditional 
writer.  A conditional mutation has tests on columns that must pass before any 
changes are made.  These test are executed in server processes while a row lock 
is held.  Below is a simple example of making atomic row changes using 
conditional mutations.</p>
+<p>Accumulo now offers a way to make atomic read-modify-write row changes from 
the client side.  Atomic test and set row operations make this possible.  <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1000"; title="Conditional 
Mutations">ACCUMULO-1000</a> added conditional mutations and a conditional 
writer.  A conditional mutation has tests on columns that must pass before any 
changes are made.  These tests are executed in server processes while a row lock 
is held.  Below is a simple example of making atomic row changes using 
conditional mutations.</p>
 <ol>
 <li>Read columns X,Y,SEQ into a,b,s from row R1 using an isolated scanner.</li>
 <li>For row R1 write conditional mutation X=f(a),Y=g(b),SEQ=s+1 if SEQ==s.</li>
@@ -210,12 +210,12 @@ Latest 1.4 release: <strong>1.4.5</stron
 </ol>
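The retry loop above can be sketched with a toy in-memory row.  This is not the Accumulo ConditionalWriter API; it is only an illustration of the test-and-set pattern that conditional mutations enable, with hypothetical functions f(a)=a+10 and g(b)=b*2:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the read/modify/write retry loop described above.  This is NOT
// the Accumulo ConditionalWriter API -- just an illustration of the
// test-and-set pattern.  Column names X, Y, SEQ mirror the steps in the list.
class ConditionalUpdateSketch {
    private final Map<String, Long> row = new HashMap<>();

    ConditionalUpdateSketch() {
        row.put("X", 1L);
        row.put("Y", 2L);
        row.put("SEQ", 0L);
    }

    // Atomic here via synchronized; a real ConditionalWriter enforces the
    // condition server-side while a row lock is held.
    synchronized boolean writeIfSeqEquals(long expectedSeq, long x, long y) {
        if (row.get("SEQ") != expectedSeq) {
            return false;  // condition failed; caller must re-read and retry
        }
        row.put("X", x);
        row.put("Y", y);
        row.put("SEQ", expectedSeq + 1);
        return true;
    }

    synchronized long get(String col) {
        return row.get(col);
    }

    // Steps 1-3 above: read a, b, s; write X=f(a), Y=g(b), SEQ=s+1 if SEQ==s.
    void updateWithRetry() {
        while (true) {
            long a = get("X"), b = get("Y"), s = get("SEQ");
            if (writeIfSeqEquals(s, a + 10, b * 2)) {  // f(a)=a+10, g(b)=b*2
                return;
            }
        }
    }
}
```

Starting from the initial row, one successful update leaves X at 11, Y at 4, and SEQ at 1.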
 <p>The only built-in tests that conditional mutations support are equality and 
isNull.  However, iterators can be configured on a conditional mutation to run 
before these tests.  This makes it possible to implement any number of tests, 
such as less than, greater than, contains, etc.</p>
 <h3 id="encryption">Encryption</h3>
-<p>Encryption is still an experimental feature, but much progress has been 
made since 1.5.0.  Support for encrypting rfiles and write ahead logs were 
added in [ACCUMULO-958][ACCUMULO-958] and [ACCUMULO-980][ACCUMULO-980].  
Support for encrypting data over the wire using SSL was added in 
[ACCUMULO-1009][ACCUMULO-1009].</p>
-<p>When a tablet server fails, its write ahead logs are sorted and stored in 
HDFS.  In 1.6.0, encrypting these sorted write ahead logs is not supported.  
[ACCUMULO-981][ACCUMULO-981] addresses this issue.  </p>
+<p>Encryption is still an experimental feature, but much progress has been 
made since 1.5.0.  Support for encrypting rfiles and write ahead logs was 
added in <a href="https://issues.apache.org/jira/browse/ACCUMULO-958"; 
title="Support pluggable encryption in walogs">ACCUMULO-958</a> and <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-980"; title="Support 
pluggable codecs for RFile">ACCUMULO-980</a>.  Support for encrypting data over 
the wire using SSL was added in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1009"; title="Support 
encryption over the wire">ACCUMULO-1009</a>.</p>
+<p>When a tablet server fails, its write ahead logs are sorted and stored in 
HDFS.  In 1.6.0, encrypting these sorted write ahead logs is not supported.  <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-981"; title="support 
pluggable encryption when recovering write-ahead logs">ACCUMULO-981</a> 
addresses this issue.  </p>
 <h3 id="pluggable-compaction-strategies">Pluggable compaction strategies</h3>
-<p>One of the key elements of the [BigTable][1] design is use of the [Log 
Structured Merge Tree][2].  This entails sorting data in memory, writing out 
sorted files, and then later merging multiple sorted files into a single file.  
 These automatic merges happen in the background and Accumulo decides when to 
merge files based comparing relative sizes of files to a compaction ratio.  
Adjusting the compaction ratio is the only way a user can control this process. 
 [ACCUMULO-1451][ACCUMULO-1451] introduces pluggable compaction strategies 
which allow users to choose when and what files to compact.  
[ACCUMULO-1808][ACCUMULO-1808] adds a compaction strategy the prevents 
compaction of files over a configurable size.</p>
+<p>One of the key elements of the <a 
href="http://research.google.com/archive/bigtable.html";>BigTable</a> design is 
use of the <a 
href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.44.2782&amp;rep=rep1&amp;type=pdf";>Log
 Structured Merge Tree</a>.  This entails sorting data in memory, writing out 
sorted files, and then later merging multiple sorted files into a single file.  
 These automatic merges happen in the background and Accumulo decides when to 
merge files based on comparing the relative sizes of files to a compaction ratio.  
Adjusting the compaction ratio is the only way a user can control this process. 
 <a href="https://issues.apache.org/jira/browse/ACCUMULO-1451"; title="Make 
Compaction triggers extensible">ACCUMULO-1451</a> introduces pluggable 
compaction strategies which allow users to choose when and what files to 
compact.  <a href="https://issues.apache.org/jira/browse/ACCUMULO-1808"; 
title="Create compaction strategy that has size limit">ACCUMULO-1808</a> adds a 
compaction strategy that prevents compaction of files over a configurable size.</p>
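In spirit, a size-capped selection like the one ACCUMULO-1808 describes might look like the following sketch (this is not the actual Accumulo CompactionStrategy interface, only the selection rule):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a size-capped file selection in the spirit of
// ACCUMULO-1808.  Not the actual Accumulo CompactionStrategy interface; it
// only shows the rule: files larger than the configurable limit are never
// chosen for compaction.
class SizeLimitSelection {
    // Returns the subset of file sizes (in bytes) eligible for compaction.
    static List<Long> selectUnder(List<Long> fileSizes, long maxFileSize) {
        List<Long> candidates = new ArrayList<>();
        for (long size : fileSizes) {
            if (size <= maxFileSize) {
                candidates.add(size);
            }
        }
        return candidates;
    }
}
```

With a 1000-byte limit, file sizes of 100, 5000, and 200 bytes yield candidates 100 and 200.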
 <h3 id="lexicoders">Lexicoders</h3>
-<p>Accumulo only sorts data lexicographically.  Getting something like a pair 
of (<em>String</em>,<em>Integer</em>) to sort correctly in Accumulo is tricky.  
It's tricky because you only want to compare the integers if the strings are 
equal.  It's possible to make this sort properly in Accumulo if the data is 
encoded properly, but can be difficult.  To make this easier 
[ACCUMULO-1336][ACCUMULO-1336] added Lexicoders to the Accumulo API.  
Lexicoders provide an easy way to serialize data so that it sorts properly 
lexicographically.  Below is a simple example.</p>
+<p>Accumulo only sorts data lexicographically.  Getting something like a pair 
of (<em>String</em>,<em>Integer</em>) to sort correctly in Accumulo is tricky.  
It's tricky because you only want to compare the integers if the strings are 
equal.  It's possible to achieve this sort order in Accumulo if the data is 
encoded properly, but doing so can be difficult.  To make this easier, <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1336"; title="Add 
lexicoders from Typo to Accumulo">ACCUMULO-1336</a> added Lexicoders to the 
Accumulo API.  Lexicoders provide an easy way to serialize data so that it 
sorts properly lexicographically.  Below is a simple example.</p>
 <div class="codehilite"><pre>   <span class="n">PairLexicoder</span> <span 
class="n">plex</span> <span class="p">=</span> <span class="n">new</span> <span 
class="n">PairLexicoder</span><span class="p">(</span><span 
class="n">new</span> <span class="n">StringLexicoder</span><span 
class="p">(),</span> <span class="n">new</span> <span 
class="n">IntegerLexicoder</span><span class="p">());</span>
    <span class="n">byte</span><span class="p">[]</span> <span 
class="n">ba1</span> <span class="p">=</span> <span class="n">plex</span><span 
class="p">.</span><span class="n">encode</span><span class="p">(</span><span 
class="n">new</span> <span class="n">ComparablePair</span><span 
class="o">&lt;</span><span class="n">String</span><span class="p">,</span> 
<span class="n">Integer</span><span class="o">&gt;</span><span 
class="p">(</span>&quot;<span class="n">b</span>&quot;<span 
class="p">,</span>1<span class="p">));</span>
    <span class="n">byte</span><span class="p">[]</span> <span 
class="n">ba2</span> <span class="p">=</span> <span class="n">plex</span><span 
class="p">.</span><span class="n">encode</span><span class="p">(</span><span 
class="n">new</span> <span class="n">ComparablePair</span><span 
class="o">&lt;</span><span class="n">String</span><span class="p">,</span> 
<span class="n">Integer</span><span class="o">&gt;</span><span 
class="p">(</span>&quot;<span class="n">aa</span>&quot;<span 
class="p">,</span>1<span class="p">));</span>
@@ -228,16 +228,16 @@ Latest 1.4 release: <strong>1.4.5</stron
 
 
 <h3 id="locality-groups-in-memory">Locality groups in memory</h3>
-<p>In cases where a very small amount of data is stored in a locality group 
one would expect fast scans over that locality group.  However this was not 
always the case because recently written data stored in memory was not 
partitioned by locality group.  Therefore if a table had 100GB of data in 
memory and 1MB of that was in locality group A, then scanning A would have 
required reading all 100GB.  [ACCUMULO-112][ACCUMULO-112] changes this and 
partitions data by locality group as its written.</p>
+<p>In cases where a very small amount of data is stored in a locality group 
one would expect fast scans over that locality group.  However, this was not 
always the case because recently written data stored in memory was not 
partitioned by locality group.  Therefore if a table had 100GB of data in 
memory and 1MB of that was in locality group A, then scanning A would have 
required reading all 100GB.  <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-112"; title="Partition data 
in memory by locality group">ACCUMULO-112</a> changes this and partitions data 
by locality group as it is written.</p>
 <h3 id="service-ip-addresses">Service IP addresses</h3>
-<p>Previous versions of Accumulo always used IP addresses internally.  This 
could be problematic in virtual machine environments where IP addresses change. 
 In [ACCUMULO-1585][ACCUMULO-1585] this was changed, now the accumulo uses the 
exact hostnames from its config files for internal addressing.  </p>
-<p>All Accumulo processes running on a cluster are locatable via zookeeper.  
Therefore using well known ports is not really required.  
[ACCUMULO-1664][ACCUMULO-1664] makes it possible to for all Accumulo processes 
to use random ports.  This makes it easier to run multiple Accumulo instances 
on a single node.   </p>
-<p>While Hadoop [does not support IPv6 networks][3], attempting to run on a 
system that does not have IPv6 completely disabled can cause strange failures. 
[ACCUMULO-2262][ACCUMULO-2262] invokes the JVM-provided configuration parameter 
at process startup to prefer IPv4 over IPv6.</p>
+<p>Previous versions of Accumulo always used IP addresses internally.  This 
could be problematic in virtual machine environments where IP addresses change. 
 In <a href="https://issues.apache.org/jira/browse/ACCUMULO-1585"; title="Use 
FQDN/verbatim data from config files">ACCUMULO-1585</a> this was changed; 
Accumulo now uses the exact hostnames from its config files for internal 
addressing.</p>
+<p>All Accumulo processes running on a cluster are locatable via zookeeper.  
Therefore, using well-known ports is not strictly required.  <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1664"; title="Make all 
processes able to use random ports">ACCUMULO-1664</a> makes it possible for 
all Accumulo processes to use random ports.  This makes it easier to run 
multiple Accumulo instances on a single node.   </p>
+<p>While Hadoop <a href="http://wiki.apache.org/hadoop/HadoopIPv6";>does not 
support IPv6 networks</a>, attempting to run on a system that does not have 
IPv6 completely disabled can cause strange failures. <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-2262"; title="Include 
java.net.preferIPv4Stack=true in process startup">ACCUMULO-2262</a> invokes the 
JVM-provided configuration parameter at process startup to prefer IPv4 over 
IPv6.</p>
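The parameter referenced here is the standard JVM networking property, passed as a startup option (shown here only as an illustrative fragment):

```
-Djava.net.preferIPv4Stack=true
```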
 <h3 id="viewfs">ViewFS</h3>
-<p>Multiple bug-fixes were made to support running Accumulo over multiple HDFS 
instances using ViewFS. [ACCUMULO-2047][ACCUMULO-2047] is the parent
+<p>Multiple bug-fixes were made to support running Accumulo over multiple HDFS 
instances using ViewFS. <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-2047"; title="Failures 
using viewfs with multiple namenodes">ACCUMULO-2047</a> is the parent
 ticket that contains numerous fixes to enable this support.</p>
 <h3 id="maven-plugin">Maven Plugin</h3>
-<p>This version of Accumulo is accompanied by a new maven plugin for testing 
client apps ([ACCUMULO-1030][ACCUMULO-1030]). You can execute the 
accumulo-maven-plugin inside your project by adding the following to your 
pom.xml's build plugins section:</p>
+<p>This version of Accumulo is accompanied by a new Maven plugin for testing 
client apps (<a href="https://issues.apache.org/jira/browse/ACCUMULO-1030"; 
title="Create a Maven plugin to run MiniAccumuloCluster for integration 
testing">ACCUMULO-1030</a>). You can execute the accumulo-maven-plugin inside 
your project by adding the following to your pom.xml's build plugins 
section:</p>
 <div class="codehilite"><pre>  <span class="nt">&lt;plugin&gt;</span>
     <span class="nt">&lt;groupId&gt;</span>org.apache.accumulo<span 
class="nt">&lt;/groupId&gt;</span>
     <span class="nt">&lt;artifactId&gt;</span>accumulo-maven-plugin<span 
class="nt">&lt;/artifactId&gt;</span>
@@ -288,39 +288,39 @@ As performance can suffer when large Key
 command. See the help message on the command for more information.</p>
 <h3 id="other-notable-changes">Other notable changes</h3>
 <ul>
-<li>[ACCUMULO-842][ACCUMULO-842] Added FATE administration to shell</li>
-<li>[ACCUMULO-1442][ACCUMULO-1442] JLine2 support was added to the shell.  
This adds features like history search and other nice things GNU Readline has. 
</li>
-<li>[ACCUMULO-1481][ACCUMULO-1481] The root tablet is now the root table.</li>
-<li>[ACCUMULO-1566][ACCUMULO-1566] When read-ahead starts in the scanner is 
now configurable.</li>
-<li>[ACCUMULO-1667][ACCUMULO-1667] Added a synchronous version of online and 
offline table</li>
-<li>[ACCUMULO-1833][ACCUMULO-1833] Multitable batch writer is faster now when 
used by multiple threads</li>
-<li>[ACCUMULO-1933][ACCUMULO-1933] Lower case can be given for memory units 
now.</li>
-<li>[ACCUMULO-1985][ACCUMULO-1985] Configuration to bind Monitor on all 
network interfaces.</li>
-<li>[ACCUMULO-2128][ACCUMULO-2128] Provide resource cleanup via static 
utility</li>
-<li>[ACCUMULO-2360][ACCUMULO-2360] Allow configuration of the maximum thrift 
message size a server will read.</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-842"; title="Add 
FATE administration to shell">ACCUMULO-842</a> Added FATE administration to 
shell</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1442"; 
title="Replace JLine with JLine2">ACCUMULO-1442</a> JLine2 support was added to 
the shell.  This adds features like history search and other nice things GNU 
Readline has. </li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1481"; title="Root 
tablet in its own table">ACCUMULO-1481</a> The root tablet is now the root 
table.</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1566"; title="Add 
ability for client to start Scanner readahead immediately">ACCUMULO-1566</a> 
When read-ahead starts in the scanner is now configurable.</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1667"; title="Allow 
On/Offline Command To Execute Synchronously">ACCUMULO-1667</a> Added a 
synchronous version of the online and offline table operations</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1833"; 
title="MultiTableBatchWriterImpl.getBatchWriter() is not performant for 
multiple threads">ACCUMULO-1833</a> Multitable batch writer is faster now when 
used by multiple threads</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1933"; title="Make 
unit on memory parameters case-insensitive">ACCUMULO-1933</a> Memory units can 
now be given in lower case.</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1985"; 
title="Cannot bind monitor on remote host to all interfaces">ACCUMULO-1985</a> 
Configuration to bind Monitor on all network interfaces.</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2128"; 
title="Provide resource cleanup via static utility rather than 
Instance.close">ACCUMULO-2128</a> Provide resource cleanup via static 
utility</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2360"; title="Need 
a way to configure TNonblockingServer.maxReadBufferBytes to prevent 
OOMs">ACCUMULO-2360</a> Allow configuration of the maximum thrift message size 
a server will read.</li>
 </ul>
 <h2 id="notable-bug-fixes">Notable Bug Fixes</h2>
 <ul>
-<li>[ACCUMULO-324][ACCUMULO-324] System/site constraints and iterators should 
NOT affect the METADATA table</li>
-<li>[ACCUMULO-335][ACCUMULO-335] Can't batchscan over the !METADATA table</li>
-<li>[ACCUMULO-391][ACCUMULO-391] Added support for reading from multiple 
tables in a Map Reduce job.</li>
-<li>[ACCUMULO-1018][ACCUMULO-1018] Client does not give informative message 
when user can not read table</li>
-<li>[ACCUMULO-1492][ACCUMULO-1492] bin/accumulo should follow symbolic 
links</li>
-<li>[ACCUMULO-1572][ACCUMULO-1572] Single node zookeeper failure kills 
connected Accumulo servers</li>
-<li>[ACCUMULO-1661][ACCUMULO-1661] AccumuloInputFormat cannot fetch empty 
column family</li>
-<li>[ACCUMULO-1696][ACCUMULO-1696] Deep copy in the compaction scope iterators 
can throw off the stats</li>
-<li>[ACCUMULO-1698][ACCUMULO-1698] stop-here doesn't consider system 
hostname</li>
-<li>[ACCUMULO-1901][ACCUMULO-1901] start-here.sh starts only one GC process 
even if more are defined</li>
-<li>[ACCUMULO-1920][ACCUMULO-1920] Monitor was not seeing zookeeper updates 
for tables</li>
-<li>[ACCUMULO-1994][ACCUMULO-1994] Proxy does not handle Key timestamps 
correctly</li>
-<li>[ACCUMULO-2037][ACCUMULO-2037] Tablets are now assigned to the last 
location </li>
-<li>[ACCUMULO-2174][ACCUMULO-2174] VFS Classloader has potential to collide 
localized resources</li>
-<li>[ACCUMULO-2225][ACCUMULO-2225] Need to better handle DNS failure 
propagation from Hadoop</li>
-<li>[ACCUMULO-2234][ACCUMULO-2234] Cannot run offline mapreduce over 
non-default instance.dfs.dir value</li>
-<li>[ACCUMULO-2261][ACCUMULO-2261] Duplicate locations for a Tablet.</li>
-<li>[ACCUMULO-2334][ACCUMULO-2334] Lacking fallback when ACCUMULO_LOG_HOST 
isn't set</li>
-<li>[ACCUMULO-2408][ACCUMULO-2408] metadata table not assigned after root 
table is loaded</li>
-<li>[ACCUMULO-2519][ACCUMULO-2519] FATE operation failed across upgrade</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-324"; 
title="System/site constraints and iterators should NOT affect the METADATA 
table">ACCUMULO-324</a> System/site constraints and iterators should NOT affect 
the METADATA table</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-335"; title="Batch 
scanning over the !METADATA table can cause issues">ACCUMULO-335</a> Can't 
batchscan over the !METADATA table</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-391"; 
title="Multi-table input format">ACCUMULO-391</a> Added support for reading 
from multiple tables in a Map Reduce job.</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1018"; 
title="Client does not give informative message when user can not read 
table">ACCUMULO-1018</a> Client does not give informative message when user can 
not read table</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1492"; 
title="bin/accumulo should follow symbolic links">ACCUMULO-1492</a> 
bin/accumulo should follow symbolic links</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1572"; 
title="Single node zookeeper failure kills connected accumulo 
servers">ACCUMULO-1572</a> Single node zookeeper failure kills connected 
Accumulo servers</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1661"; 
title="AccumuloInputFormat cannot fetch empty column family">ACCUMULO-1661</a> 
AccumuloInputFormat cannot fetch empty column family</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1696"; title="Deep 
copy in the compaction scope iterators can throw off the 
stats">ACCUMULO-1696</a> Deep copy in the compaction scope iterators can throw 
off the stats</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1698"; 
title="stop-here doesn't consider system hostname">ACCUMULO-1698</a> stop-here 
doesn't consider system hostname</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1901"; 
title="start-here.sh starts only one GC process even if more are 
defined">ACCUMULO-1901</a> start-here.sh starts only one GC process even if 
more are defined</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1920"; 
title="monitor not seeing zookeeper updates">ACCUMULO-1920</a> Monitor was not 
seeing zookeeper updates for tables</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1994"; title="Proxy 
does not handle Key timestamps correctly">ACCUMULO-1994</a> Proxy does not 
handle Key timestamps correctly</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2037"; 
title="Tablets not assigned to last location">ACCUMULO-2037</a> Tablets are now 
assigned to the last location </li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2174"; title="VFS 
Classloader has potential to collide localized resources">ACCUMULO-2174</a> VFS 
Classloader has potential to collide localized resources</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2225"; title="Need 
to better handle DNS failure propagation from Hadoop">ACCUMULO-2225</a> Need to 
better handle DNS failure propagation from Hadoop</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2234"; 
title="Cannot run offline mapreduce over non-default instance.dfs.dir 
value">ACCUMULO-2234</a> Cannot run offline mapreduce over non-default 
instance.dfs.dir value</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2261"; 
title="duplicate locations">ACCUMULO-2261</a> Duplicate locations for a 
Tablet.</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2334"; 
title="Lacking fallback when ACCUMULO_LOG_HOST isn't set">ACCUMULO-2334</a> 
Lacking fallback when ACCUMULO_LOG_HOST isn't set</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2408"; 
title="metadata table not assigned after root table is 
loaded">ACCUMULO-2408</a> metadata table not assigned after root table is 
loaded</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2519"; title="FATE 
operation failed across upgrade">ACCUMULO-2519</a> FATE operation failed across 
upgrade</li>
 </ul>
 <h2 id="known-issues">Known Issues</h2>
 <h3 id="slower-writes-than-previous-accumulo-versions">Slower writes than 
previous Accumulo versions</h3>
@@ -333,47 +333,47 @@ the value of the tserver.mutation.queue.
 the number of concurrent writers to that TabletServer. For example, a value of 
4M with
 50 concurrent writers would equate to approximately 200M of Java heap being 
used for
 mutation queues.</p>
-<p>For more information, see [ACCUMULO-1950][ACCUMULO-1950] and [this 
comment][ACCUMULO-1905-comment].</p>
+<p>For more information, see <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1950"; title="Reduce the 
number of calls to hsync">ACCUMULO-1950</a> and <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1905?focusedCommentId=13915208&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13915208";>this
 comment</a>.</p>
 <p>Another possible cause of slower writes is the change in write ahead log 
replication 
between 1.4 and 1.5.  Accumulo 1.4 defaulted to two logger servers.  
Accumulo 1.5 and 1.6 store 
 write ahead logs in HDFS and default to using three datanodes.  </p>
 <h3 id="batchwriter-hold-time-error">BatchWriter hold time error</h3>
 <p>If a <code>BatchWriter</code> fails with 
<code>MutationsRejectedException</code> and the  message contains
-<code>"# server errors 1"</code> then it may be 
[ACCUMULO-2388][ACCUMULO-2388].  To confirm this look in the tablet server logs 
+<code>"# server errors 1"</code> then it may be <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-2388";>ACCUMULO-2388</a>.  
To confirm this, look in the tablet server logs 
 for <code>org.apache.accumulo.tserver.HoldTimeoutException</code> around the 
time the <code>BatchWriter</code> failed.
 If this is happening often, a possible workaround is to set 
<code>general.rpc.timeout</code> to <code>240s</code>.    </p>
 <h3 id="other-known-issues">Other known issues</h3>
 <ul>
-<li>[ACCUMULO-981][ACCUMULO-981] Sorted write ahead logs are not 
encrypted.</li>
-<li>[ACCUMULO-1507][ACCUMULO-1507] Dynamic Classloader still can't keep proper 
track of jars</li>
-<li>[ACCUMULO-1588][ACCUMULO-1588] Monitor XML and JSON differ</li>
-<li>[ACCUMULO-1628][ACCUMULO-1628] NPE on deep copied dumped memory 
iterator</li>
-<li>[ACCUMULO-1708][ACCUMULO-1708] [ACCUMULO-2495][ACCUMULO-2495] Out of 
memory errors do not always kill tservers leading to unexpected behavior</li>
-<li>[ACCUMULO-2008][ACCUMULO-2008] Block cache reserves section for in-memory 
blocks</li>
-<li>[ACCUMULO-2059][ACCUMULO-2059] Namespace constraints easily get clobbered 
by table constraints</li>
-<li>[ACCUMULO-2677][ACCUMULO-2677] Tserver failure during map reduce reading 
from table can cause sub-optimal performance</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-981"; 
title="support pluggable encryption when recovering write-ahead 
logs">ACCUMULO-981</a> Sorted write ahead logs are not encrypted.</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1507"; 
title="Dynamic Classloader still can't keep proper track of 
jars">ACCUMULO-1507</a> Dynamic Classloader still can't keep proper track of 
jars</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1588"; 
title="Monitor XML and JSON differ">ACCUMULO-1588</a> Monitor XML and JSON 
differ</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1628"; title="NPE 
on deep copied dumped memory iterator">ACCUMULO-1628</a> NPE on deep copied 
dumped memory iterator</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1708"; title="Error 
during minor compaction left tserver in bad state">ACCUMULO-1708</a> <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-2495"; title="OOM exception 
didn't bring down tserver">ACCUMULO-2495</a> Out of memory errors do not always 
kill tservers leading to unexpected behavior</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2008"; title="Block 
cache reserves section for in-memory blocks">ACCUMULO-2008</a> Block cache 
reserves section for in-memory blocks</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2059"; 
title="Namespace constraints easily get clobbered by table 
constraints">ACCUMULO-2059</a> Namespace constraints easily get clobbered by 
table constraints</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2677"; 
title="Single node bottle neck during map reduce">ACCUMULO-2677</a> Tserver 
failure during map reduce reading from table can cause sub-optimal 
performance</li>
 </ul>
 <h2 id="documentation-updates">Documentation updates</h2>
 <ul>
-<li>[ACCUMULO-1218][ACCUMULO-1218] document the recovery from a failed 
zookeeper</li>
-<li>[ACCUMULO-1375][ACCUMULO-1375] Update README files in proxy module.</li>
-<li>[ACCUMULO-1407][ACCUMULO-1407] Fix documentation for deleterows</li>
-<li>[ACCUMULO-1428][ACCUMULO-1428] Document native maps</li>
-<li>[ACCUMULO-1946][ACCUMULO-1946] Include dfs.datanode.synconclose in hdfs 
configuration documentation</li>
-<li>[ACCUMULO-1956][ACCUMULO-1956] Add section on decomissioning or adding 
nodes to an Accumulo cluster</li>
-<li>[ACCUMULO-2441][ACCUMULO-2441] Document internal state stored in RFile 
names</li>
-<li>[ACCUMULO-2590][ACCUMULO-2590] Update public API in readme to clarify 
what's included</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1218" 
title="document the recovery from a failed zookeeper">ACCUMULO-1218</a> 
Document the recovery from a failed zookeeper</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1375" 
title="Update README files in proxy module.">ACCUMULO-1375</a> Update README 
files in proxy module</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1407" title="Fix 
documentation for deleterows">ACCUMULO-1407</a> Fix documentation for 
deleterows</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1428" 
title="Document native maps">ACCUMULO-1428</a> Document native maps</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1946" 
title="Include dfs.datanode.synconclose in hdfs configuration 
documentation">ACCUMULO-1946</a> Include dfs.datanode.synconclose in HDFS 
configuration documentation</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-1956" title="Add 
section on decomissioning or adding nodes to an Accumulo 
cluster">ACCUMULO-1956</a> Add section on decommissioning or adding nodes to an 
Accumulo cluster</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2441" 
title="Document internal state stored in RFile names">ACCUMULO-2441</a> 
Document internal state stored in RFile names</li>
+<li><a href="https://issues.apache.org/jira/browse/ACCUMULO-2590" 
title="Update public API in readme to clarify what's 
included">ACCUMULO-2590</a> Update public API in README to clarify what's 
included</li>
 </ul>
 <h2 id="api-changes">API Changes</h2>
-<p>The following deprecated methods were removed in 
[ACCUMULO-1533][ACCUMULO-1533]</p>
+<p>The following deprecated methods were removed in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1533">ACCUMULO-1533</a>.</p>
 <ul>
-<li>Many map reduce methods deprecated in [ACCUMULO-769][ACCUMULO-769] were 
removed </li>
-<li><code>SecurityErrorCode 
o.a.a.core.client.AccumuloSecurityException.getErrorCode()</code> 
<em>deprecated in [ACCUMULO-970][ACCUMULO-970]</em></li>
-<li><code>Connector o.a.a.core.client.Instance.getConnector(AuthInfo)</code> 
<em>deprecated in [ACCUMULO-1024][ACCUMULO-1024]</em></li>
-<li><code>Connector 
o.a.a.core.client.ZooKeeperInstance.getConnector(AuthInfo)</code> 
<em>deprecated in [ACCUMULO-1024][ACCUMULO-1024]</em></li>
-<li><code>static String 
o.a.a.core.client.ZooKeeperInstance.getInstanceIDFromHdfs(Path)</code> 
<em>deprecated in [ACCUMULO-1][ACCUMULO-1]</em></li>
-<li><code>static String ZooKeeperInstance.lookupInstanceName 
(ZooCache,UUID)</code> <em>deprecated in [ACCUMULO-765][ACCUMULO-765]</em></li>
-<li><code>void o.a.a.core.client.ColumnUpdate.setSystemTimestamp(long)</code>  
<em>deprecated in [ACCUMULO-786][ACCUMULO-786]</em></li>
+<li>Many map reduce methods deprecated in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-769">ACCUMULO-769</a> were 
removed</li>
+<li><code>SecurityErrorCode 
o.a.a.core.client.AccumuloSecurityException.getErrorCode()</code> 
<em>deprecated in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-970">ACCUMULO-970</a></em></li>
+<li><code>Connector o.a.a.core.client.Instance.getConnector(AuthInfo)</code> 
<em>deprecated in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1024">ACCUMULO-1024</a></em></li>
+<li><code>Connector 
o.a.a.core.client.ZooKeeperInstance.getConnector(AuthInfo)</code> 
<em>deprecated in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1024">ACCUMULO-1024</a></em></li>
+<li><code>static String 
o.a.a.core.client.ZooKeeperInstance.getInstanceIDFromHdfs(Path)</code> 
<em>deprecated in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-1">ACCUMULO-1</a></em></li>
+<li><code>static String 
ZooKeeperInstance.lookupInstanceName(ZooCache,UUID)</code> <em>deprecated in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-765">ACCUMULO-765</a></em></li>
+<li><code>void o.a.a.core.client.ColumnUpdate.setSystemTimestamp(long)</code> 
<em>deprecated in <a 
href="https://issues.apache.org/jira/browse/ACCUMULO-786">ACCUMULO-786</a></em></li>
 </ul>
 <h2 id="testing">Testing</h2>
 <p>Below is a list of all platforms that 1.6.0 was tested against by 
developers. Each Apache Accumulo release
@@ -388,27 +388,12 @@ has a set of tests that must be run befo
 <p>Each unit and functional test only runs on a single node, while the 
RandomWalk and Continuous Ingest tests run 
 on any number of nodes. <em>Agitation</em> refers to randomly restarting 
Accumulo processes and Hadoop Datanode processes,
 and, in HDFS High-Availability instances, forcing NameNode failover.</p>
-<table id="release_notes_testing">
-  <tr>
-    <th>Testing acronym</th>
-    <th>Meaning</th>
-  </tr>
-  <tr>
-    <td>CI</td>
-    <td>Continuous Ingest</td>
-  </tr>
-  <tr>
-    <td>RW</td>
-    <td>Random walk</td>
-  </tr>
-  <tr>
-    <td>HA</td>
-    <td>High-Availability</td>
-  </tr>
-</table>
-
-<p>
-
+<p>The following acronyms are used in the testing table below.</p>
+<ul>
+<li>CI : Continuous Ingest</li>
+<li>RW : Random Walk</li>
+<li>HA : High-Availability</li>
+</ul>
 <table id="release_notes_testing">
   <tr>
     <th>OS</th>
@@ -458,91 +443,9 @@ and, in HDFS High-Availability instances
     <td>3.4.5</td>
     <td>No</td>
     <td>1.6.0 RC5</td>
-    <td>All unit and integration tests. 2B entries ingested/verified with 
Continuous Ingest </td>
+    <td>All unit and integration tests. 2B entries ingested/verified with CI 
</td>
   </tr>
 </table>
-
-[ACCUMULO-1]: https://issues.apache.org/jira/browse/ACCUMULO-1
-[ACCUMULO-112]: https://issues.apache.org/jira/browse/ACCUMULO-112 "Partition 
data in memory by locality group"
-[ACCUMULO-118]: https://issues.apache.org/jira/browse/ACCUMULO-118 "Multiple 
namenode support"
-[ACCUMULO-324]: https://issues.apache.org/jira/browse/ACCUMULO-324 
"System/site constraints and iterators should NOT affect the METADATA table"
-[ACCUMULO-335]: https://issues.apache.org/jira/browse/ACCUMULO-335 "Batch 
scanning over the !METADATA table can cause issues"
-[ACCUMULO-391]: https://issues.apache.org/jira/browse/ACCUMULO-391 
"Multi-table input format"
-[ACCUMULO-765]: https://issues.apache.org/jira/browse/ACCUMULO-765
-[ACCUMULO-769]: https://issues.apache.org/jira/browse/ACCUMULO-769
-[ACCUMULO-786]: https://issues.apache.org/jira/browse/ACCUMULO-786
-[ACCUMULO-802]: https://issues.apache.org/jira/browse/ACCUMULO-802 "Table 
namespaces"
-[ACCUMULO-842]: https://issues.apache.org/jira/browse/ACCUMULO-842 "Add FATE 
administration to shell"
-[ACCUMULO-958]: https://issues.apache.org/jira/browse/ACCUMULO-958 "Support 
pluggable encryption in walogs"
-[ACCUMULO-970]: https://issues.apache.org/jira/browse/ACCUMULO-970
-[ACCUMULO-980]: https://issues.apache.org/jira/browse/ACCUMULO-980 "Support 
pluggable codecs for RFile"
-[ACCUMULO-981]: https://issues.apache.org/jira/browse/ACCUMULO-981 "support 
pluggable encryption when recovering write-ahead logs"
-[ACCUMULO-1000]: https://issues.apache.org/jira/browse/ACCUMULO-1000 
"Conditional Mutations"
-[ACCUMULO-1009]: https://issues.apache.org/jira/browse/ACCUMULO-1009 "Support 
encryption over the wire"
-[ACCUMULO-1018]: https://issues.apache.org/jira/browse/ACCUMULO-1018 "Client 
does not give informative message when user can not read table"
-[ACCUMULO-1024]: https://issues.apache.org/jira/browse/ACCUMULO-1024
-[ACCUMULO-1030]: https://issues.apache.org/jira/browse/ACCUMULO-1030 "Create a 
Maven plugin to run MiniAccumuloCluster for integration testing"
-[ACCUMULO-1218]: https://issues.apache.org/jira/browse/ACCUMULO-1218 "document 
the recovery from a failed zookeeper"
-[ACCUMULO-1336]: https://issues.apache.org/jira/browse/ACCUMULO-1336 "Add 
lexicoders from Typo to Accumulo"
-[ACCUMULO-1375]: https://issues.apache.org/jira/browse/ACCUMULO-1375 "Update 
README files in proxy module."
-[ACCUMULO-1407]: https://issues.apache.org/jira/browse/ACCUMULO-1407 "Fix 
documentation for deleterows"
-[ACCUMULO-1428]: https://issues.apache.org/jira/browse/ACCUMULO-1428 "Document 
native maps"
-[ACCUMULO-1442]: https://issues.apache.org/jira/browse/ACCUMULO-1442 "Replace 
JLine with JLine2"
-[ACCUMULO-1451]: https://issues.apache.org/jira/browse/ACCUMULO-1451 "Make 
Compaction triggers extensible"
-[ACCUMULO-1481]: https://issues.apache.org/jira/browse/ACCUMULO-1481 "Root 
tablet in its own table"
-[ACCUMULO-1492]: https://issues.apache.org/jira/browse/ACCUMULO-1492 
"bin/accumulo should follow symbolic links"
-[ACCUMULO-1507]: https://issues.apache.org/jira/browse/ACCUMULO-1507 "Dynamic 
Classloader still can't keep proper track of jars"
-[ACCUMULO-1533]: https://issues.apache.org/jira/browse/ACCUMULO-1533
-[ACCUMULO-1585]: https://issues.apache.org/jira/browse/ACCUMULO-1585 "Use node 
addresses from config files verbatim"
-[ACCUMULO-1562]: https://issues.apache.org/jira/browse/ACCUMULO-1562 "add a 
troubleshooting section to the user guide"
-[ACCUMULO-1566]: https://issues.apache.org/jira/browse/ACCUMULO-1566 "Add 
ability for client to start Scanner readahead immediately"
-[ACCUMULO-1572]: https://issues.apache.org/jira/browse/ACCUMULO-1572 "Single 
node zookeeper failure kills connected accumulo servers"
-[ACCUMULO-1585]: https://issues.apache.org/jira/browse/ACCUMULO-1585 "Use 
FQDN/verbatim data from config files"
-[ACCUMULO-1588]: https://issues.apache.org/jira/browse/ACCUMULO-1588 "Monitor 
XML and JSON differ"
-[ACCUMULO-1628]: https://issues.apache.org/jira/browse/ACCUMULO-1628 "NPE on 
deep copied dumped memory iterator"
-[ACCUMULO-1661]: https://issues.apache.org/jira/browse/ACCUMULO-1661 
"AccumuloInputFormat cannot fetch empty column family"
-[ACCUMULO-1664]: https://issues.apache.org/jira/browse/ACCUMULO-1664 "Make all 
processes able to use random ports"
-[ACCUMULO-1667]: https://issues.apache.org/jira/browse/ACCUMULO-1667 "Allow 
On/Offline Command To Execute Synchronously"
-[ACCUMULO-1696]: https://issues.apache.org/jira/browse/ACCUMULO-1696 "Deep 
copy in the compaction scope iterators can throw off the stats"
-[ACCUMULO-1698]: https://issues.apache.org/jira/browse/ACCUMULO-1698 
"stop-here doesn't consider system hostname"
-[ACCUMULO-1704]: https://issues.apache.org/jira/browse/ACCUMULO-1704 
"IteratorSetting missing (int,String,Class,Map) constructor"
-[ACCUMULO-1708]: https://issues.apache.org/jira/browse/ACCUMULO-1708 "Error 
during minor compaction left tserver in bad state"
-[ACCUMULO-1808]: https://issues.apache.org/jira/browse/ACCUMULO-1808 "Create 
compaction strategy that has size limit"
-[ACCUMULO-1833]: https://issues.apache.org/jira/browse/ACCUMULO-1833 
"MultiTableBatchWriterImpl.getBatchWriter() is not performant for multiple 
threads"
-[ACCUMULO-1901]: https://issues.apache.org/jira/browse/ACCUMULO-1901 
"start-here.sh starts only one GC process even if more are defined"
-[ACCUMULO-1905-comment]: 
https://issues.apache.org/jira/browse/ACCUMULO-1905?focusedCommentId=13915208&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13915208
-[ACCUMULO-1920]: https://issues.apache.org/jira/browse/ACCUMULO-1920 "monitor 
not seeing zookeeper updates"
-[ACCUMULO-1933]: https://issues.apache.org/jira/browse/ACCUMULO-1933 "Make 
unit on memory parameters case-insensitive"
-[ACCUMULO-1946]: https://issues.apache.org/jira/browse/ACCUMULO-1946 "Include 
dfs.datanode.synconclose in hdfs configuration documentation"
-[ACCUMULO-1950]: https://issues.apache.org/jira/browse/ACCUMULO-1950 "Reduce 
the number of calls to hsync"
-[ACCUMULO-1956]: https://issues.apache.org/jira/browse/ACCUMULO-1956 "Add 
section on decomissioning or adding nodes to an Accumulo cluster"
-[ACCUMULO-1958]: https://issues.apache.org/jira/browse/ACCUMULO-1958 "Range 
constructor lacks key checks, should be non-public"
-[ACCUMULO-1985]: https://issues.apache.org/jira/browse/ACCUMULO-1985 "Cannot 
bind monitor on remote host to all interfaces"
-[ACCUMULO-1994]: https://issues.apache.org/jira/browse/ACCUMULO-1994 "Proxy 
does not handle Key timestamps correctly"
-[ACCUMULO-2008]: https://issues.apache.org/jira/browse/ACCUMULO-2008 "Block 
cache reserves section for in-memory blocks"
-[ACCUMULO-2037]: https://issues.apache.org/jira/browse/ACCUMULO-2037 "Tablets 
not assigned to last location"
-[ACCUMULO-2047]: https://issues.apache.org/jira/browse/ACCUMULO-2047 "Failures 
using viewfs with multiple namenodes"
-[ACCUMULO-2059]: https://issues.apache.org/jira/browse/ACCUMULO-2059 
"Namespace constraints easily get clobbered by table constraints"
-[ACCUMULO-2128]: https://issues.apache.org/jira/browse/ACCUMULO-2128 "Provide 
resource cleanup via static utility rather than Instance.close"
-[ACCUMULO-2174]: https://issues.apache.org/jira/browse/ACCUMULO-2174 "VFS 
Classloader has potential to collide localized resources"
-[ACCUMULO-2225]: https://issues.apache.org/jira/browse/ACCUMULO-2225 "Need to 
better handle DNS failure propagation from Hadoop"
-[ACCUMULO-2234]: https://issues.apache.org/jira/browse/ACCUMULO-2234 "Cannot 
run offline mapreduce over non-default instance.dfs.dir value"
-[ACCUMULO-2261]: https://issues.apache.org/jira/browse/ACCUMULO-2261 
"duplicate locations"
-[ACCUMULO-2262]: https://issues.apache.org/jira/browse/ACCUMULO-2262 "Include 
java.net.preferIPv4Stack=true in process startup"
-[ACCUMULO-2334]: https://issues.apache.org/jira/browse/ACCUMULO-2334 "Lacking 
fallback when ACCUMULO_LOG_HOST isn't set"
-[ACCUMULO-2360]: https://issues.apache.org/jira/browse/ACCUMULO-2360 "Need a 
way to configure TNonblockingServer.maxReadBufferBytes to prevent OOMs"
-[ACCUMULO-2388]: https://issues.apache.org/jira/browse/ACCUMULO-2388
-[ACCUMULO-2408]: https://issues.apache.org/jira/browse/ACCUMULO-2408 "metadata 
table not assigned after root table is loaded"
-[ACCUMULO-2441]: https://issues.apache.org/jira/browse/ACCUMULO-2441 "Document 
internal state stored in RFile names"
-[ACCUMULO-2495]: https://issues.apache.org/jira/browse/ACCUMULO-2495 "OOM 
exception didn't bring down tserver"
-[ACCUMULO-2519]: https://issues.apache.org/jira/browse/ACCUMULO-2519 "FATE 
operation failed across upgrade"
-[ACCUMULO-2590]: https://issues.apache.org/jira/browse/ACCUMULO-2590 "Update 
public API in readme to clarify what's included"
-[ACCUMULO-2659]: https://issues.apache.org/jira/browse/ACCUMULO-2659
-[ACCUMULO-2677]: https://issues.apache.org/jira/browse/ACCUMULO-2677 "Single 
node bottle neck during map reduce"
-
-  [1]: http://research.google.com/archive/bigtable.html
-  [2]: 
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.44.2782&rep=rep1&type=pdf
-  [3]: http://wiki.apache.org/hadoop/HadoopIPv6
   </div>
 
   <div id="footer">

