http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/data-grid/persistent-store.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/persistent-store.md 
b/wiki/documentation/data-grid/persistent-store.md
new file mode 100755
index 0000000..bbd3fbc
--- /dev/null
+++ b/wiki/documentation/data-grid/persistent-store.md
@@ -0,0 +1,129 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+The JCache specification comes with APIs for [javax.cache.integration.CacheLoader](https://ignite.incubator.apache.org/jcache/1.0.0/javadoc/javax/cache/integration/CacheLoader.html) and [javax.cache.integration.CacheWriter](https://ignite.incubator.apache.org/jcache/1.0.0/javadoc/javax/cache/integration/CacheWriter.html), which are used for **read-through** and **write-through** from and to an underlying persistent store respectively (e.g. an RDBMS like Oracle or MySQL, or a NoSQL database like MongoDB or Couchbase).
+
+While Ignite allows you to configure the `CacheLoader` and `CacheWriter` separately, it is awkward to implement a transactional store in two separate classes, as multiple `load` and `put` operations have to share the same connection within the same transaction. To mitigate that, Ignite provides the `org.apache.ignite.cache.store.CacheStore` interface, which extends both `CacheLoader` and `CacheWriter`.
+[block:callout]
+{
+  "type": "info",
+  "title": "Transactions",
+  "body": "`CacheStore` is fully transactional and automatically merges into 
the ongoing cache transaction."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "CacheStore"
+}
+[/block]
+The `CacheStore` interface in Ignite is used to write and load data to and from the underlying data store. In addition to the standard JCache loading and storing methods, it also introduces end-of-transaction demarcation and the ability to bulk-load a cache from the underlying data store.
+
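+For reference, the `CacheStore` interface looks roughly as follows (a simplified sketch; see the Ignite javadoc for the exact signatures):
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Simplified sketch of org.apache.ignite.cache.store.CacheStore.\npublic interface CacheStore<K, V> extends CacheLoader<K, V>, CacheWriter<K, V> {\n  // Bulk-loads the cache, passing each loaded key-value pair to the provided closure.\n  public void loadCache(IgniteBiInClosure<K, V> clo, Object... args) throws CacheLoaderException;\n\n  // End-of-transaction demarcation: commit or roll back the current store session.\n  public void sessionEnd(boolean commit) throws CacheWriterException;\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]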
+## loadCache()
+`CacheStore.loadCache()` method allows for cache loading even without passing 
all the keys that need to be loaded. It is generally used for hot-loading the 
cache on startup, but can be also called at any point after the cache has been 
started.
+
+`IgniteCache.loadCache()` method will delegate to `CacheStore.loadCache()` 
method on every cluster member that is running the cache. To invoke loading 
only on the local cluster node, use `IgniteCache.localLoadCache()` method.
+[block:callout]
+{
+  "type": "info",
+  "body": "In case of partitioned caches, keys that are not mapped to this 
node, either as primary or backups, will be automatically discarded by the 
cache."
+}
+[/block]
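+For example, a hot-load on startup might look like the following (a minimal sketch that assumes an `IgniteCache<Long, Person>` instance named `cache` has already been obtained, and that the store implementation expects an entry count argument, as in the `CacheStore` example below):
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Delegates to CacheStore.loadCache() on every node that runs the cache.\n// The argument (expected entry count) is passed through to the store implementation.\ncache.loadCache(null, 100000);\n\n// Or load only on the local node.\ncache.localLoadCache(null, 100000);",
+      "language": "java"
+    }
+  ]
+}
+[/block]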
+## load(), write(), delete()
+Methods `load()`, `write()`, and `delete()` on the `CacheStore` are called whenever the methods `get()`, `put()`, and `remove()` are called respectively on the `IgniteCache` interface. These methods are used to enable **read-through** and **write-through** behavior when working with individual cache entries.
+
+## loadAll(), writeAll(), deleteAll()
+Methods `loadAll()`, `writeAll()`, and `deleteAll()` on the `CacheStore` are called whenever the methods `getAll()`, `putAll()`, and `removeAll()` are called respectively on the `IgniteCache` interface. These methods are used to enable **read-through** and **write-through** behavior when working with multiple cache entries and should generally be implemented using batch operations to provide better performance.
+[block:callout]
+{
+  "type": "info",
+  "title": "",
+  "body": "`CacheStoreAdapter` provides default implementation for 
`loadAll()`, `writeAll()`, and `deleteAll()` methods which simply iterates 
through all keys one by one."
+}
+[/block]
+## sessionEnd()
+Ignite has a concept of store session which may span more than one cache store 
operation. Sessions are especially useful when working with transactions.
+
+In case of `ATOMIC` caches, the `sessionEnd()` method is called after the completion of each `CacheStore` method. In case of `TRANSACTIONAL` caches, `sessionEnd()` is called at the end of each transaction, which makes it possible to either commit or roll back multiple operations on the underlying persistent store.
+[block:callout]
+{
+  "type": "info",
+  "body": "`CacheStoreAdapater` provides default empty implementation of 
`sessionEnd()` method."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "CacheStoreSession"
+}
+[/block]
+The main purpose of a cache store session is to hold the context between multiple store invocations whenever `CacheStore` is used in a cache transaction. For example, if using JDBC, you can store the ongoing database connection via the `CacheStoreSession.attach()` method. You can then commit this connection in the `CacheStore#sessionEnd(boolean)` method.
+
+`CacheStoreSession` can be injected into your cache store implementation via the `@CacheStoreSessionResource` annotation.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "CacheStore Example"
+}
+[/block]
+Below are a couple of different possible cache store implementations. Note that the transactional implementation works both with and without transactions.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "public class CacheJdbcPersonStore extends 
CacheStoreAdapter<Long, Person> {\n  // This method is called whenever 
\"get(...)\" methods are called on IgniteCache.\n  @Override public Person 
load(Long key) {\n    try (Connection conn = connection()) {\n      try 
(PreparedStatement st = conn.prepareStatement(\"select * from PERSONS where 
id=?\")) {\n        st.setLong(1, key);\n\n        ResultSet rs = 
st.executeQuery();\n\n        return rs.next() ? new Person(rs.getLong(1), 
rs.getString(2), rs.getString(3)) : null;\n      }\n    }\n    catch 
(SQLException e) {\n      throw new CacheLoaderException(\"Failed to load: \" + 
key, e);\n    }\n  }\n\n  // This method is called whenever \"put(...)\" methods are called on IgniteCache.\n  @Override public void write(Cache.Entry<Long, Person> entry) {\n    try (Connection conn = connection()) {\n      // Syntax of MERGE statement is database specific and should be adapted for your database.\n      // If your database does not support the MERGE statement, use sequential update and insert statements.\n      try (PreparedStatement st = conn.prepareStatement(\n        \"merge into PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        Person val = entry.getValue();\n\n        st.setLong(1, entry.getKey());\n        st.setString(2, val.getFirstName());\n        st.setString(3, val.getLastName());\n\n        st.executeUpdate();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to write [key=\" + entry.getKey() + \", val=\" + val + ']', e);\n    }\n  }\n\n  // This method is called whenever \"remove(...)\" methods 
are called on IgniteCache.\n  @Override public void delete(Object key) {\n    
try (Connection conn = connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\"delete from PERSONS where id=?\")) {\n        
st.setLong(1, (Long)key);\n\n      
   st.executeUpdate();\n      }\n    }\n    catch (SQLException e) {\n      
throw new CacheWriterException(\"Failed to delete: \" + key, e);\n    }\n  
}\n\n  // This method is called whenever \"loadCache()\" and 
\"localLoadCache()\"\n  // methods are called on IgniteCache. It is used for 
bulk-loading the cache.\n  // If you don't need to bulk-load the cache, skip 
this method.\n  @Override public void loadCache(IgniteBiInClosure<Long, Person> 
clo, Object... args) {\n    if (args == null || args.length == 0 || args[0] == 
null)\n      throw new CacheLoaderException(\"Expected entry count parameter is 
not provided.\");\n\n    final int entryCnt = (Integer)args[0];\n\n    try 
(Connection conn = connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs 
= st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < 
entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), 
rs.getString(2),
  rs.getString(3));\n\n            clo.apply(person.getId(), person);\n\n       
     cnt++;\n          }\n        }\n      }\n    }\n    catch (SQLException e) 
{\n      throw new CacheLoaderException(\"Failed to load values from cache 
store.\", e);\n    }\n  }\n\n  // Open JDBC connection.\n  private Connection 
connection() throws SQLException  {\n    // Open connection to your RDBMS 
systems (Oracle, MySQL, Postgres, DB2, Microsoft SQL, etc.)\n    // In this 
example we use H2 Database for simplification.\n    Connection conn = 
DriverManager.getConnection(\"jdbc:h2:mem:example;DB_CLOSE_DELAY=-1\");\n\n    
conn.setAutoCommit(true);\n\n    return conn;\n  }\n}",
+      "language": "java",
+      "name": "jdbc non-transactional"
+    },
+    {
+      "code": "public class CacheJdbcPersonStore extends 
CacheStoreAdapter<Long, Person> {\n  /** Auto-injected store session. */\n  
@CacheStoreSessionResource\n  private CacheStoreSession ses;\n\n  // Complete 
transaction or simply close connection if there is no transaction.\n  @Override 
public void sessionEnd(boolean commit) {\n    try (Connection conn = 
ses.getAttached()) {\n      if (conn != null && ses.isWithinTransaction()) {\n  
      if (commit)\n          conn.commit();\n        else\n          
conn.rollback();\n      }\n    }\n    catch (SQLException e) {\n      throw new 
CacheWriterException(\"Failed to end store session.\", e);\n    }\n  }\n\n  // 
This method is called whenever \"get(...)\" methods are called on 
IgniteCache.\n  @Override public Person load(Long key) {\n    try (Connection 
conn = connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\"select * from PERSONS where id=?\")) {\n        
st.setLong(1, key);\n\n        ResultSet rs = st.executeQuery();\n\n        return rs.next() ? new Person(rs.getLong(1), 
rs.getString(2), rs.getString(3)) : null;\n      }\n    }\n    catch 
(SQLException e) {\n      throw new CacheLoaderException(\"Failed to load: \" + 
key, e);\n    }\n  }\n\n  // This method is called whenever \"put(...)\" methods are called on IgniteCache.\n  @Override public void write(Cache.Entry<Long, Person> entry) {\n    try (Connection conn = connection()) {\n      // Syntax of MERGE statement is database specific and should be adapted for your database.\n      // If your database does not support the MERGE statement, use sequential update and insert statements.\n      try (PreparedStatement st = conn.prepareStatement(\n        \"merge into PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        Person val = entry.getValue();\n\n        st.setLong(1, entry.getKey());\n        st.setString(2, val.getFirstName());\n        st.setString(3, val.getLastName());\n\n        st.executeUpdate();\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheWriterException(\"Failed to write [key=\" + entry.getKey() + \", val=\" + val + ']', e);\n    }\n  }\n\n  // This method is called whenever 
\"remove(...)\" methods are called on IgniteCache.\n  @Override public void 
delete(Object key) {\n    try (Connection conn = connection()) {\n      try 
(PreparedStatement st = conn.prepareStatement(\"delete from PERSONS where 
id=?\")) {\n        st.setLong(1, (Long)key);\n\n        st.executeUpdate();\n  
    }\n    }\n    catch (SQLException e) {\n      throw new 
CacheWriterException(\"Failed to delete: \" + key, e);\n    }\n  }\n\n  // This 
method is called whenever \"loadCache()\" and \"localLoadCache()\"\n  // 
methods are called on IgniteCache. It is used for bulk-loading the cache.\n  // 
If you don't need to bulk-load the cache, skip this method.\n  @Override public 
void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {\n    if (args == null || 
args.length == 0 || args[0] == null)\n      throw new 
CacheLoaderException(\"Expected entry count parameter is not provided.\");\n\n  
  final int entryCnt = (Integer)args[0];\n\n    try (Connection conn = 
connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs 
= st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < 
entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), 
rs.getString(2), rs.getString(3));\n\n            clo.apply(person.getId(), 
person);\n\n            cnt++;\n          }\n        }\n      }\n    }\n    
catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load 
values from cache store.\", e);\n    }\n  }\n\n  // Opens JDBC connection and 
attaches it to the ongoing\n  // session if within a transaction.\n  private 
Connection connection() throws SQLException {\n    if (ses.isWithinTransaction()) {\n      Connection conn = 
ses.getAttached();\n\n      if (conn == null) {\n        conn = 
openConnection(false);\n\n        // Store connection in the session, so it can 
be accessed\n        // for other operations within the same transaction.\n     
   ses.attach(conn);\n      }\n\n      return conn;\n    }\n    // Transaction 
can be null in case of simple load or put operation.\n    else\n      return 
openConnection(true);\n  }\n\n  // Opens JDBC connection.\n  private Connection 
openConnection(boolean autocommit) throws SQLException {\n    // Open 
connection to your RDBMS systems (Oracle, MySQL, Postgres, DB2, Microsoft SQL, 
etc.)\n    // In this example we use H2 Database for simplification.\n    
Connection conn = 
DriverManager.getConnection(\"jdbc:h2:mem:example;DB_CLOSE_DELAY=-1\");\n\n    
conn.setAutoCommit(autocommit);\n\n    return conn;\n  }\n}",
+      "language": "java",
+      "name": "jdbc transactional"
+    },
+    {
+      "code": "public class CacheJdbcPersonStore extends CacheStore<Long, 
Person> {\n  // Skip single operations and open connection methods.\n  // You 
can copy them from jdbc non-transactional or jdbc transactional examples.\n  
...\n  \n  // This mehtod is called whenever \"getAll(...)\" methods are called 
on IgniteCache.\n  @Override public Map<K, V> loadAll(Iterable<Long> keys) {\n  
  try (Connection conn = connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\n        \"select firstName, lastName from PERSONS where 
id=?\")) {\n        Map<K, V> loaded = new HashMap<>();\n        \n        for 
(Long key : keys) {\n          st.setLong(1, key);\n          \n          
try(ResultSet rs = st.executeQuery()) {\n            if (rs.next())\n           
   loaded.put(key, new Person(key, rs.getString(1), rs.getString(2));\n         
 }\n        }\n\n        return loaded;\n      }\n    }\n    catch 
(SQLException e) {\n      throw new CacheLoaderException(\"Failed to loa
 dAll: \" + keys, e);\n    }\n  }\n  \n  // This mehtod is called whenever 
\"putAll(...)\" methods are called on IgniteCache.\n  @Override public void 
writeAll(Collection<Cache.Entry<Long, Person>> entries) {\n    try (Connection 
conn = connection()) {\n      // Syntax of MERGE statement is database specific 
and should be adapted for your database.\n      // If your database does not support the MERGE statement, use sequential update and insert statements.\n      
try (PreparedStatement st = conn.prepareStatement(\n        \"merge into 
PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        for 
(Cache.Entry<Long, Person> entry : entries) {\n          Person val = 
entry.getValue();\n          \n          st.setLong(1, entry.getKey());\n       
   st.setString(2, val.getFirstName());\n          st.setString(3, 
val.getLastName());\n          \n          st.addBatch();\n        }\n        
\n\t\t\t\tst.executeBatch();\n      }\n    }\n    catch (SQLException e) {\n    
  throw new CacheWriterException(\"Failed to writeAll: \" + entries, e);\n    }\n  }\n  \n  // This method is called whenever \"removeAll(...)\" methods are 
called on IgniteCache.\n  @Override public void deleteAll(Collection<Long> 
keys) {\n    try (Connection conn = connection()) {\n      try 
(PreparedStatement st = conn.prepareStatement(\"delete from PERSONS where 
id=?\")) {\n        for (Long key : keys) {\n          st.setLong(1, key);\n    
      \n          st.addBatch();\n        }\n        
\n\t\t\t\tst.executeBatch();\n      }\n    }\n    catch (SQLException e) {\n    
  throw new CacheWriterException(\"Failed to deleteAll: \" + keys, e);\n    }\n 
 }\n}",
+      "language": "java",
+      "name": "jdbc bulk operations"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+A `CacheStore` implementation can be set on `CacheConfiguration` via a `Factory`, in much the same way as `CacheLoader` and `CacheWriter` are set.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n    
<property name=\"cacheConfiguration\">\n      <list>\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n          ...\n  
        <property name=\"cacheStoreFactory\">\n            <bean 
class=\"javax.cache.configuration.FactoryBuilder$SingletonFactory\">\n          
    <constructor-arg>\n                <bean class=\"foo.bar.MyPersonStore\">\n 
   \t\t\t\t\t\t\t...\n    \t\t\t\t\t\t</bean>\n    
\t\t\t\t\t</constructor-arg>\n    \t\t\t\t</bean>\n\t    \t\t</property>\n    
\t\t\t...\n    \t\t</bean>\n    \t</list>\n    </property>\n  ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "IgniteConfiguration cfg = new 
IgniteConfiguration();\n\nCacheConfiguration<Long, Person> cacheCfg = new 
CacheConfiguration<>();\n\nCacheStore<Long, Person> store;\n\nstore = new 
MyPersonStore();\n\ncacheCfg.setCacheStoreFactory(new 
FactoryBuilder.SingletonFactory<>(store));\ncacheCfg.setReadThrough(true);\ncacheCfg.setWriteThrough(true);\n\ncfg.setCacheConfiguration(cacheCfg);\n\n//
 Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/data-grid/rebalancing.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/rebalancing.md 
b/wiki/documentation/data-grid/rebalancing.md
new file mode 100755
index 0000000..fbe47cc
--- /dev/null
+++ b/wiki/documentation/data-grid/rebalancing.md
@@ -0,0 +1,123 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+When a new node joins the topology, existing nodes relinquish primary or backup ownership of some keys to the new node so that keys remain equally balanced across the grid at all times.
+
+If the new node becomes a primary or backup for some partition, it will fetch data from the previous primary node for that partition or from one of the backup nodes for that partition. Once a partition is fully loaded on the new node, it will be marked obsolete on the old node and eventually evicted after all current transactions on that node are finished. Hence, for a short period of time after a topology change, a cache may have more backup copies of a key than configured. However, once rebalancing completes, the extra backup copies will be removed from node caches.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Preload Modes"
+}
+[/block]
+The following preload modes are defined in the `CachePreloadMode` enum.
+[block:parameters]
+{
+  "data": {
+    "0-0": "`SYNC`",
+    "h-0": "CachePreloadMode",
+    "h-1": "Description",
+    "0-1": "Synchronous rebalancing mode. Distributed caches will not start 
until all necessary data is loaded from other available grid nodes. This means 
that any call to cache public API will be blocked until rebalancing is 
finished.",
+    "1-1": "Asynchronous rebalancing mode. Distributed caches will start 
immediately and will load all necessary data from other available grid nodes in 
the background.",
+    "1-0": "`ASYNC`",
+    "2-1": "In this mode no rebalancing will take place which means that 
caches will be either loaded on demand from persistent store whenever data is 
accessed, or will be populated explicitly.",
+    "2-0": "`NONE`"
+  },
+  "cols": 2,
+  "rows": 3
+}
+[/block]
+By default, `ASYNC` preload mode is enabled. To use another mode, you can set 
the `preloadMode` property of `CacheConfiguration`, like so:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">          \t\t\n   
       \t<!-- Set synchronous preloading. -->\n    \t\t\t\t<property 
name=\"preloadMode\" value=\"SYNC\"/>\n            ... \n        </bean\n    
</property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setPreloadMode(CachePreloadMode.SYNC);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Rebalance Message Throttling"
+}
+[/block]
+When the rebalancer transfers data from one node to another, it splits the whole data set into batches and sends each batch in a separate message. If your data sets are large and there are a lot of messages to send, the CPU or network can get over-consumed. In this case it can be reasonable to wait between rebalance messages so that the negative performance impact caused by the preloading process is minimized. This time interval is controlled by the `preloadThrottle` configuration property of `CacheConfiguration`. Its default value is 0, which means that there will be no pauses between messages. Note that the size of a single message can also be customized via the `preloadBatchSize` configuration property (the default size is 512K).
+
+For example, if you want the preloader to send 2MB of data per message with a 100 ms throttle interval, you should provide the following configuration:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">          \t\t\n   
       \t<!-- Set batch size. -->\n    \t\t\t\t<property 
name=\"preloadBatchSize\" value=\"#{2 * 1024 * 1024}\"/>\n \n    \t\t\t\t<!-- 
Set throttle interval. -->\n    \t\t\t\t<property name=\"preloadThrottle\" 
value=\"100\"/>\n            ... \n        </bean\n    </property>\n</bean> ",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setPreloadBatchSize(2 * 1024 * 1024);\n       
     \ncacheCfg.setPreloadThrottle(100);\n\nIgniteConfiguration cfg = new 
IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start 
Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+Cache preloading behavior can be customized by optionally setting the 
following configuration properties:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`setPreloadMode`",
+    "0-1": "Preload mode for distributed cache. See Preload Modes section for 
details.",
+    "1-0": "`setPreloadPartitionedDelay`",
+    "1-1": "Preloading delay in milliseconds. See Delayed And Manual 
Preloading section for details.",
+    "2-0": "`setPreloadBatchSize`",
+    "2-1": "Size (in bytes) to be loaded within a single preload message. 
Preloading algorithm will split total data set on every node into multiple 
batches prior to sending data.",
+    "3-0": "`setPreloadThreadPoolSize`",
+    "3-1": "Size of preloading thread pool. Note that size serves as a hint 
and implementation may create more threads for preloading than specified here 
(but never less threads).",
+    "4-0": "`setPreloadThrottle`",
+    "4-1": "Time in milliseconds to wait between preload messages to avoid 
overloading of CPU or network. When preloading large data sets, the CPU or 
network can get over-consumed with preloading messages, which consecutively may 
slow down the application performance. This parameter helps tune the amount of 
time to wait between preload messages to make sure that preloading process does 
not have any negative performance impact. Note that application will continue 
to work properly while preloading is still in progress.",
+    "5-0": "`setPreloadOrder`",
+    "6-0": "`setPreloadTimeout`",
+    "5-1": "Order in which preloading should be done. Preload order can be set 
to non-zero value for caches with SYNC or ASYNC preload modes only. Preloading 
for caches with smaller preload order will be completed first. By default, 
preloading is not ordered.",
+    "6-1": "Preload timeout (ms).",
+    "0-2": "`ASYNC`",
+    "1-2": "0 (no delay)",
+    "2-2": "512K",
+    "3-2": "2",
+    "4-2": "0 (throttling disabled)",
+    "5-2": "0",
+    "6-2": "10000"
+  },
+  "cols": 3,
+  "rows": 7
+}
+[/block]
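+As a reference, below is a sketch of how several of these properties could be set together, assuming the setter names listed above (the values are arbitrary examples):
+[block:code]
+{
+  "codes": [
+    {
+      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\n// Rebalance synchronously and before caches with a higher preload order.\ncacheCfg.setPreloadMode(CachePreloadMode.SYNC);\ncacheCfg.setPreloadOrder(1);\n\n// Use 4 preloading threads and 1MB batches, with a 50 ms pause between messages.\ncacheCfg.setPreloadThreadPoolSize(4);\ncacheCfg.setPreloadBatchSize(1024 * 1024);\ncacheCfg.setPreloadThrottle(50);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]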
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/data-grid/transactions.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/transactions.md 
b/wiki/documentation/data-grid/transactions.md
new file mode 100755
index 0000000..a87cb53
--- /dev/null
+++ b/wiki/documentation/data-grid/transactions.md
@@ -0,0 +1,145 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+Ignite supports 2 modes for cache operation, *transactional* and *atomic*. In `transactional` mode you are able to group multiple cache operations in a transaction, while `atomic` mode executes multiple atomic operations, one at a time. `Atomic` mode is more lightweight and generally has better performance than `transactional` caches.
+
+However, regardless of which mode you use, as long as your cluster is alive, 
the data between different cluster nodes must remain consistent. This means 
that whichever node is being used to retrieve data, it will never get data that 
has been partially committed or that is inconsistent with other data.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "IgniteTransactions"
+}
+[/block]
+The `IgniteTransactions` interface contains functionality for starting and completing transactions, as well as for subscribing listeners and getting metrics.
+[block:callout]
+{
+  "type": "info",
+  "title": "Cross-Cache Transactions",
+  "body": "You can combine multiple operations from different caches into one 
transaction. Note that this allows to update caches of different types, like 
`REPLICATED` and `PARTITIONED` caches, in one transaction."
+}
+[/block]
+You can obtain an instance of `IgniteTransactions` as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteTransactions 
transactions = ignite.transactions();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+Here is an example of how transactions can be performed in Ignite:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "try (Transaction tx = transactions.txStart()) {\n    Integer 
hello = cache.get(\"Hello\");\n  \n    if (hello == 1)\n        
cache.put(\"Hello\", 11);\n  \n    cache.put(\"World\", 22);\n  \n    
tx.commit();\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Two-Phase-Commit (2PC)"
+}
+[/block]
+Ignite utilizes the 2PC protocol for its transactions, with many one-phase-commit optimizations applied whenever applicable. Whenever data is updated within a transaction, Ignite keeps the transactional state in a local transaction map until `commit()` is called, at which point, if needed, the data is transferred to the participating remote nodes.
+
+For more information on how Ignite 2PC works, you can check out these blogs:
+  * [Two-Phase-Commit for Distributed In-Memory 
Caches](http://gridgain.blogspot.com/2014/09/two-phase-commit-for-distributed-in.html)
+  *  [Two-Phase-Commit for In-Memory Caches - Part 
II](http://gridgain.blogspot.com/2014/09/two-phase-commit-for-in-memory-caches.html)
 
+  * [One-Phase-Commit - Fast Transactions For In-Memory 
Caches](http://gridgain.blogspot.com/2014/09/one-phase-commit-fast-transactions-for.html)
 
+[block:callout]
+{
+  "type": "success",
+  "body": "Ignite provides fully ACID (**A**tomicity, **C**onsistency, 
**I**solation, **D**urability) compliant transactions that ensure guaranteed 
consistency.",
+  "title": "ACID Compliance"
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Optimistic and Pessimistic"
+}
+[/block]
+Whenever `TRANSACTIONAL` atomicity mode is configured, Ignite supports 
`OPTIMISTIC` and `PESSIMISTIC` concurrency modes for transactions. The main 
difference is that in `PESSIMISTIC` mode locks are acquired at the time of 
access, while in `OPTIMISTIC` mode locks are acquired during the `commit` phase.
+
+Ignite also supports the following isolation levels:
+  * `READ_COMMITTED` - data is always fetched from the primary node, even if it has already been accessed within the transaction.
+  * `REPEATABLE_READ` - data is fetched from the primary node only once on first access and stored in the local transactional map. All subsequent access to the same data is local.
+  * `SERIALIZABLE` - when combined with `OPTIMISTIC` concurrency, transactions may throw `TransactionOptimisticException` in case of concurrent updates. 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteTransactions txs = ignite.transactions();\n\n// Start 
transaction in optimistic mode with repeatable read isolation 
level.\nTransaction tx = txs.txStart(TransactionConcurrency.OPTIMISTIC, 
TransactionIsolation.REPEATABLE_READ);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Atomicity Mode"
+}
+[/block]
+Ignite supports 2 atomicity modes defined in `CacheAtomicityMode` enum:
+  * `TRANSACTIONAL`
+  * `ATOMIC`
+
+`TRANSACTIONAL` mode enables fully ACID-compliant transactions; however, when only atomic semantics are needed, it is recommended to use `ATOMIC` mode for better performance.
+
+`ATOMIC` mode provides better performance by avoiding transactional locks, while still providing data atomicity and consistency. Another difference in `ATOMIC` mode is that bulk writes, such as the `putAll(...)` and `removeAll(...)` methods, are no longer executed in one transaction and can partially fail. In case of partial failure, a `CachePartialUpdateException` will be thrown, containing the list of keys for which the update failed.
+[block:callout]
+{
+  "type": "info",
+  "body": "Note that transactions are disabled whenever `ATOMIC` mode is used, 
which allows to achieve much higher performance and throughput in cases when 
transactions are not needed.",
+  "title": "Performance"
+}
+[/block]
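+Below is a minimal sketch of handling such a partial failure; it assumes an `IgniteCache<Long, Person>` named `cache`, a `Map<Long, Person>` named `entries`, and an accessor for the failed keys (shown here as `failedKeys()`):
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Sketch: handling a partial failure of a bulk update on an ATOMIC cache.\ntry {\n    cache.putAll(entries);\n}\ncatch (CachePartialUpdateException e) {\n    // The exception carries the keys for which the update failed; they can be retried.\n    System.err.println(\"Update failed for keys: \" + e.failedKeys());\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]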
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+Atomicity mode is defined in the `CacheAtomicityMode` enum and can be configured via the `atomicityMode` property of `CacheConfiguration`.
+
+Default atomicity mode is `ATOMIC`.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n          \t<!-- 
Set a cache name. -->\n   \t\t\t\t\t<property name=\"name\" 
value=\"myCache\"/>\n\n            <!-- Set atomicity mode, can be ATOMIC or 
TRANSACTIONAL. -->\n    \t\t\t\t<property name=\"atomicityMode\" 
value=\"TRANSACTIONAL\"/>\n            ... \n        </bean\n    
</property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/data-grid/web-session-clustering.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/web-session-clustering.md 
b/wiki/documentation/data-grid/web-session-clustering.md
new file mode 100755
index 0000000..a6a575c
--- /dev/null
+++ b/wiki/documentation/data-grid/web-session-clustering.md
@@ -0,0 +1,254 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+Ignite In-Memory Data Fabric is capable of caching web sessions of all Java Servlet containers that follow the Java Servlet 3.0 Specification, including Apache Tomcat, Eclipse Jetty, Oracle WebLogic, and others.
+
+Web session caching becomes useful when running a cluster of app servers. When running a web application in a servlet container, you may face performance and scalability problems. A single app server is usually not able to handle large volumes of traffic by itself. A common solution is to scale your web application across multiple clustered instances:
+[block:image]
+{
+  "images": [
+    {
+      "image": [
+        "https://www.filepicker.io/api/file/AlvqqQhZRym15ji5iztA";,
+        "web_sessions_1.png",
+        "561",
+        "502",
+        "#7f9eaa",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+In the architecture shown above, the High Availability Proxy (Load Balancer) distributes requests between multiple Application Server instances (App Server 1, App Server 2, ...), reducing the load on each instance and providing service availability if any of the instances fails. The problem here is web session availability. A web session keeps an intermediate logical state between requests by using cookies, and is normally bound to a particular application instance. Generally this is handled using sticky connections, ensuring that requests from the same user are handled by the same app server instance. However, if that instance fails, the session is lost, and the user will have to create it anew, losing all the current unsaved state:
+[block:image]
+{
+  "images": [
+    {
+      "image": [
+        "https://www.filepicker.io/api/file/KtAyyVzrQ5CwhxODgEVV";,
+        "web_sessions_2.png",
+        "561",
+        "502",
+        "#fb7661",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+A solution here is to use the Ignite In-Memory Data Fabric web sessions cache - a distributed cache that maintains a copy of each created session, sharing them between all instances. If any of your application instances fails, Ignite will automatically restore the sessions owned by the failed instance from the distributed cache, regardless of which app server the next request is forwarded to. Moreover, with web session caching, sticky connections become less important as the session is available on any app server the web request may be routed to.
+[block:image]
+{
+  "images": [
+    {
+      "image": [
+        "https://www.filepicker.io/api/file/8WyBbutWSm4PRYDNWRr7";,
+        "web_sessions_3.png",
+        "561",
+        "502",
+        "#f73239",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+In this chapter we give a brief architecture overview of Ignite's web session caching functionality and instructions on how to configure your web application to enable web session caching.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Architecture"
+}
+[/block]
+To set up a distributed web sessions cache with Ignite, you normally configure your web application to start an Ignite node (embedded mode). When multiple application server instances are started, all Ignite nodes connect with each other, forming a distributed cache.
+[block:callout]
+{
+  "type": "info",
+  "body": "Note that not every Ignite caching node has to be running inside of 
application server. You can also start additional, standalone Ignite nodes and 
add them to the topology as well."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Replication Strategies"
+}
+[/block]
+There are several replication strategies you can use when storing sessions in 
Ignite In-Memory Data Fabric. The replication strategy is defined by the 
backing cache settings. In this section we briefly cover most common 
configurations.
+
+##Fully Replicated Cache
+This strategy stores copies of all sessions on each Ignite node, providing 
maximum availability. However with this approach you can only cache as many web 
sessions as can fit in memory on a single server. Additionally, the performance 
may suffer as every change of web session state now must be replicated to all 
other cluster nodes.
+
+To enable the fully replicated strategy, set the `cacheMode` of your backing cache to `REPLICATED`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache 
mode. -->\n    <property name=\"cacheMode\" value=\"REPLICATED\"/>\n    
...\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+##Partitioned Cache with Backups
+In partitioned mode, web sessions are split into partitions and every node is 
responsible for caching only partitions assigned to that node. With this 
approach, the more nodes you have, the more data can be cached. New nodes can 
always be added on the fly to add more memory.
+[block:callout]
+{
+  "type": "info",
+  "body": "With `Partitioned` mode, redundancy is addressed by configuring 
number of backups for every web session being cached."
+}
+[/block]
+To enable the partitioned strategy, set the `cacheMode` of your backing cache to `PARTITIONED`, and set the number of backups with the `backups` property of `CacheConfiguration`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache 
mode. -->\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    
<property name=\"backups\" value=\"1\"/>\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "info",
+  "body": "See [Cache Distribution Models](doc:cache-distribution-models) for 
more information on different replication strategies available in Ignite."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Expiration and Eviction"
+}
+[/block]
+Stale sessions are cleaned up from the cache automatically when they expire. However, if there are a lot of long-living sessions created, you may want to save memory by evicting dispensable sessions from the cache when the cache reaches a certain limit. This can be done by setting up a cache eviction policy and specifying the maximum number of sessions to be stored in the cache. For example, to enable automatic eviction with the LRU algorithm and a limit of 10000 sessions, you will need to use the following cache configuration:
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache 
name. -->\n    <property name=\"name\" value=\"session-cache\"/>\n \n    <!-- 
Set up LRU eviction policy with 10000 sessions limit. -->\n    <property 
name=\"evictionPolicy\">\n        <bean 
class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n        
    <property name=\"maxSize\" value=\"10000\"/>\n        </bean>\n    
</property>\n    ...\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "info",
+  "body": "For more information about various eviction policies, see Eviction 
Policies section."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+To enable web session caching in your application with Ignite, you need to:
+
+1\. **Add Ignite JARs** - Download Ignite and add the following jars to your application’s classpath (`WEB-INF/lib` folder):
+  * `ignite.jar`
+  * `ignite-web.jar`
+  * `ignite-log4j.jar`
+  * `ignite-spring.jar`
+
+Or, if you have a Maven-based project, add the following to your application's pom.xml.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<dependency>\n      <groupId>org.ignite</groupId>\n      
<artifactId>ignite-fabric</artifactId>\n      <version> 
${ignite.version}</version>\n      
<type>pom</type>\n</dependency>\n\n<dependency>\n    
<groupId>org.ignite</groupId>\n    <artifactId>ignite-web</artifactId>\n    
<version> ${ignite.version}</version>\n</dependency>\n\n<dependency>\n    
<groupId>org.ignite</groupId>\n    <artifactId>ignite-log4j</artifactId>\n    
<version>${ignite.version}</version>\n</dependency>\n\n<dependency>\n    
<groupId>org.ignite</groupId>\n    <artifactId>ignite-spring</artifactId>\n    
<version>${ignite.version}</version>\n</dependency>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+Make sure to replace ${ignite.version} with the actual Ignite version.
+
+2\. **Configure Cache Mode** - Configure Ignite cache in either `PARTITIONED` 
or `REPLICATED` mode (See [examples](#replication-strategies) above).
+
+3\. **Update `web.xml`** - Declare a context listener and web session filter 
in `web.xml`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "...\n\n<listener>\n   
<listener-class>org.apache.ignite.startup.servlet.IgniteServletContextListenerStartup</listener-class>\n</listener>\n\n<filter>\n
   <filter-name>IgniteWebSessionsFilter</filter-name>\n   
<filter-class>org.apache.ignite.cache.websession.IgniteWebSessionFilter</filter-class>\n</filter>\n\n<!--
 You can also specify a custom URL pattern. -->\n<filter-mapping>\n   
<filter-name>IgniteWebSessionsFilter</filter-name>\n   
<url-pattern>/*</url-pattern>\n</filter-mapping>\n\n<!-- Specify Ignite 
configuration (relative to META-INF folder or Ignite_HOME). 
-->\n<context-param>\n   <param-name>IgniteConfigurationFilePath</param-name>\n 
  <param-value>config/default-config.xml</param-value>\n</context-param>\n\n<!-- Specify the name of Ignite cache for 
web sessions. -->\n<context-param>\n   
<param-name>IgniteWebSessionsCacheName</param-name>\n   
<param-value>partitioned</param-value>\n</context-param>\n\n...",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+On application start, the listener will start an Ignite node within your application, which will connect to other nodes in the network, forming a distributed cache.
+
+4\. **Set Eviction Policy (Optional)** - Set eviction policy for stale web 
sessions data lying in cache (See [example](#expiration-and-eviction) above).
+
+##Configuration Parameters
+`IgniteServletContextListenerStartup` has the following configuration 
parameters:
+[block:parameters]
+{
+  "data": {
+    "0-0": "`IgniteConfigurationFilePath`",
+    "0-1": "Path to Ignite configuration file (relative to `META_INF` folder 
or `IGNITE_HOME`).",
+    "0-2": "`/config/default-config.xml`",
+    "h-2": "Default",
+    "h-1": "Description",
+    "h-0": "Parameter Name"
+  },
+  "cols": 3,
+  "rows": 1
+}
+[/block]
+`IgniteWebSessionFilter` has the following configuration parameters:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Parameter Name",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`IgniteWebSessionsGridName`",
+    "0-1": "Grid name for a started Ignite node. Should refer to grid in 
configuration file (if a grid name is specified in configuration).",
+    "0-2": "null",
+    "1-0": "`IgniteWebSessionsCacheName`",
+    "2-0": "`IgniteWebSessionsMaximumRetriesOnFail`",
+    "1-1": "Name of Ignite cache to use for web sessions caching.",
+    "1-2": "null",
+    "2-1": "Valid only for `ATOMIC` caches. Specifies number of retries in 
case of primary node failures.",
+    "2-2": "3"
+  },
+  "cols": 3,
+  "rows": 3
+}
+[/block]
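+For example, these parameters could be provided as context parameters in `web.xml`, in the same way as in the configuration above (the values shown are only illustrative):
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<!-- Grid name; must match the grid name in the Ignite configuration, if one is set. -->\n<context-param>\n   <param-name>IgniteWebSessionsGridName</param-name>\n   <param-value>session-grid</param-value>\n</context-param>\n\n<!-- Name of the Ignite cache used for web sessions. -->\n<context-param>\n   <param-name>IgniteWebSessionsCacheName</param-name>\n   <param-value>partitioned</param-value>\n</context-param>\n\n<!-- Valid for ATOMIC caches only: number of retries on primary node failure. -->\n<context-param>\n   <param-name>IgniteWebSessionsMaximumRetriesOnFail</param-name>\n   <param-value>5</param-value>\n</context-param>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]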
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Supported Containers"
+}
+[/block]
+Ignite has been officially tested with the following servlet containers:
+  * Apache Tomcat 7
+  * Eclipse Jetty 9
+  * Apache Tomcat 6
+  * Oracle WebLogic >= 10.3.4
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/distributed-data-structures/atomic-types.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/distributed-data-structures/atomic-types.md 
b/wiki/documentation/distributed-data-structures/atomic-types.md
new file mode 100755
index 0000000..1043f19
--- /dev/null
+++ b/wiki/documentation/distributed-data-structures/atomic-types.md
@@ -0,0 +1,115 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+Ignite supports distributed ***atomic long*** and ***atomic reference***, similar to `java.util.concurrent.atomic.AtomicLong` and `java.util.concurrent.atomic.AtomicReference` respectively.
+
+Atomics in Ignite are distributed across the cluster, essentially enabling atomic operations (such as increment-and-get or compare-and-set) to be performed on the same globally-visible value. For example, you could update the value of an atomic long on one node and read it from another node.
+
+##Features
+  * Retrieve current value.
+  * Atomically modify current value.
+  * Atomically increment or decrement current value.
+  * Atomically compare-and-set the current value to new value.
+
+Distributed atomic long and atomic reference can be obtained via 
`IgniteAtomicLong` and `IgniteAtomicReference` interfaces respectively, as 
shown below:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n \nIgniteAtomicLong 
atomicLong = ignite.atomicLong(\n    \"atomicName\", // Atomic long name.\n    
0,        \t\t// Initial value.\n    false     \t\t// Create if it does not 
exist.\n)",
+      "language": "java",
+      "name": "AtomicLong"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Create an 
AtomicReference.\nIgniteAtomicReference<String> ref = 
ignite.atomicReference(\n    \"refName\",  // Reference name.\n    \"someVal\", 
 // Initial value for atomic reference.\n    true        // Create if it does 
not exist.\n);",
+      "language": "java",
+      "name": "AtomicReference"
+    }
+  ]
+}
+[/block]
+
+Below is a usage example of `IgniteAtomicLong` and `IgniteAtomicReference`:
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Initialize atomic 
long.\nfinal IgniteAtomicLong atomicLong = ignite.atomicLong(\"atomicName\", 0, 
true);\n\n// Increment atomic long on local 
node.\nSystem.out.println(\"Incremented value: \" + 
atomicLong.incrementAndGet());\n",
+      "language": "java",
+      "name": "AtomicLong"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Initialize atomic 
reference.\nIgniteAtomicReference<String> ref = 
ignite.atomicReference(\"refName\", \"someVal\", true);\n\n// Compare old value 
to new value and if they are equal,\n//only then set the old value to new 
value.\nref.compareAndSet(\"WRONG EXPECTED VALUE\", \"someNewVal\"); // Won't 
change.",
+      "language": "java",
+      "name": "AtomicReference"
+    }
+  ]
+}
+[/block]
+All atomic operations provided by `IgniteAtomicLong` and 
`IgniteAtomicReference` are synchronous. The time an atomic operation will take 
depends on the number of nodes performing concurrent operations with the same 
instance of atomic long, the intensity of these operations, and network latency.
+[block:callout]
+{
+  "type": "info",
+  "title": "",
+  "body": "`IgniteCache` interface has `putIfAbsent()` and `replace()` 
methods, which provide the same CAS functionality as atomic types."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Atomic Configuration"
+}
+[/block]
+Atomics in Ignite can be configured via the `atomicConfiguration` property of `IgniteConfiguration`. The following configuration parameters can be used:
+[block:parameters]
+{
+  "data": {
+    "0-0": "`setBackups(int)`",
+    "1-0": "`setCacheMode(CacheMode)`",
+    "2-0": "`setAtomicSequenceReserveSize(int)`",
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-1": "Set number of backups.",
+    "0-2": "0",
+    "1-1": "Set cache mode for all atomic types.",
+    "1-2": "`PARTITIONED`",
+    "2-1": "Sets the number of sequence values reserved for 
`IgniteAtomicSequence` instances.",
+    "2-2": "1000"
+  },
+  "cols": 3,
+  "rows": 3
+}
+[/block]
+##Example 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    
<property name=\"atomicConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.AtomicConfiguration\">\n            
<!-- Set number of backups. -->\n            <property name=\"backups\" 
value=\"1\"/>\n          \t\n          \t<!-- Set number of sequence values to 
be reserved. -->\n          \t<property name=\"atomicSequenceReserveSize\" 
value=\"5000\"/>\n        </bean>\n    </property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "AtomicConfiguration atomicCfg = new AtomicConfiguration();\n 
\n// Set number of backups.\natomicCfg.setBackups(1);\n\n// Set number of 
sequence values to be reserved. 
\natomicCfg.setAtomicSequenceReserveSize(5000);\n\nIgniteConfiguration cfg = 
new IgniteConfiguration();\n  \n// Use atomic configuration in Ignite 
configuration.\ncfg.setAtomicConfiguration(atomicCfg);\n  \n// Start Ignite 
node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/distributed-data-structures/countdownlatch.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/distributed-data-structures/countdownlatch.md 
b/wiki/documentation/distributed-data-structures/countdownlatch.md
new file mode 100755
index 0000000..68b2eda
--- /dev/null
+++ b/wiki/documentation/distributed-data-structures/countdownlatch.md
@@ -0,0 +1,42 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+If you are familiar with `java.util.concurrent.CountDownLatch` for 
synchronization between threads within a single JVM, Ignite provides 
`IgniteCountDownLatch` to allow similar behavior across cluster nodes. 
+
+A distributed CountDownLatch in Ignite can be created as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteCountDownLatch 
latch = ignite.countDownLatch(\n    \"latchName\", // Latch name.\n    10,      
  \t // Initial count.\n    false        // Auto remove, when counter has 
reached zero.\n    true         // Create if it does not exist.\n);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+After the above code is executed, all nodes in the cluster will be able to synchronize on the latch named `latchName`. Below is an example of such synchronization:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nfinal 
IgniteCountDownLatch latch = ignite.countDownLatch(\"latchName\", 10, false, 
true);\n\n// Execute jobs.\nfor (int i = 0; i < 10; i++)\n    // Execute a job 
on some remote cluster node.\n    ignite.compute().run(() -> {\n        int 
newCnt = latch.countDown();\n\n        System.out.println(\"Counted down: 
newCnt=\" + newCnt);\n    });\n\n// Wait for all jobs to 
complete.\nlatch.await();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/distributed-data-structures/id-generator.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/distributed-data-structures/id-generator.md 
b/wiki/documentation/distributed-data-structures/id-generator.md
new file mode 100755
index 0000000..3ba7b4f
--- /dev/null
+++ b/wiki/documentation/distributed-data-structures/id-generator.md
@@ -0,0 +1,58 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+Distributed atomic sequence, provided by the `IgniteAtomicSequence` interface, is similar to distributed atomic long, but its value can only go up. It also supports reserving a range of values to avoid costly network trips or cache updates every time a sequence must provide the next value. That is, when you perform `incrementAndGet()` (or any other atomic operation) on an atomic sequence, the data structure reserves a range of values ahead of time, which are guaranteed to be unique across the cluster for this sequence instance. 
+
+Here is an example of how atomic sequence can be created:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n \nIgniteAtomicSequence seq 
= ignite.atomicSequence(\n    \"seqName\", // Sequence name.\n    0,       // 
Initial value for sequence.\n    true     // Create if it does not exist.\n);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+Below is a simple usage example:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Initialize atomic 
sequence.\nfinal IgniteAtomicSequence seq = ignite.atomicSequence(\"seqName\", 
0, true);\n\n// Increment atomic sequence.\nfor (int i = 0; i < 20; i++) {\n  
long currentValue = seq.get();\n  long newValue = seq.incrementAndGet();\n  \n  
...\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Sequence Reserve Size"
+}
+[/block]
+The key parameter of `IgniteAtomicSequence` is `atomicSequenceReserveSize`, which is the number of sequence values reserved per node. When a node obtains an instance of `IgniteAtomicSequence`, a range of sequence values is reserved for that node, and subsequent increments of the sequence happen locally without communication with other nodes, until the next reservation has to be made. 
+
+The default value for `atomicSequenceReserveSize` is `1000`. This default 
setting can be changed by modifying the `atomicSequenceReserveSize` property of 
`AtomicConfiguration`. 
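+For example, a larger reserve size could be configured like this (a minimal sketch; the value `5000` is an arbitrary illustration):
+[block:code]
+{
+  "codes": [
+    {
+      "code": "AtomicConfiguration atomicCfg = new AtomicConfiguration();\n\n// Reserve 5,000 sequence values per node instead of the default 1,000.\natomicCfg.setAtomicSequenceReserveSize(5000);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\n// Use the atomic configuration when starting the node.\ncfg.setAtomicConfiguration(atomicCfg);\n\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]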
+[block:callout]
+{
+  "type": "info",
+  "body": "Refer to [Atomic 
Configuration](/docs/atomic-types#atomic-configuration) for more information on 
various atomic configuration properties, and examples on how to configure them."
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/distributed-data-structures/queue-and-set.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/distributed-data-structures/queue-and-set.md 
b/wiki/documentation/distributed-data-structures/queue-and-set.md
new file mode 100755
index 0000000..b7ca419
--- /dev/null
+++ b/wiki/documentation/distributed-data-structures/queue-and-set.md
@@ -0,0 +1,134 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+Ignite In-Memory Data Fabric, in addition to providing standard key-value 
map-like storage, also provides an implementation of a fast Distributed 
Blocking Queue and Distributed Set.
+
+`IgniteQueue` and `IgniteSet`, implementations of the `java.util.concurrent.BlockingQueue` and `java.util.Set` interfaces respectively, also support all operations of the `java.util.Collection` interface. Both queue and set can be created in either collocated or non-collocated mode.
+
+Below is an example of how to create a distributed queue and set.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteQueue<String> queue 
= ignite.queue(\n    \"queueName\", // Queue name.\n    0,          // Queue 
capacity. 0 for unbounded queue.\n    null         // Collection 
configuration.\n);",
+      "language": "java",
+      "name": "Queue"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteSet<String> set = 
ignite.set(\n    \"setName\", // Set name.\n    null       // Collection 
configuration.\n);",
+      "language": "java",
+      "name": "Set"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Collocated vs. Non-Collocated Mode"
+}
+[/block]
+If you plan to create just a few queues or sets containing lots of data, then you should create them in non-collocated mode. This ensures that a roughly equal portion of each queue or set is stored on every cluster node. On the other hand, if you plan to have many queues or sets that are relatively small in size (compared to the whole cache), then you would most likely create them in collocated mode. In this mode all elements of a given queue or set are stored on the same cluster node, while a roughly equal number of queues/sets is assigned to every node.
+A collocated queue or set can be created by setting the `collocated` property of `CollectionConfiguration`, like so:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nCollectionConfiguration 
colCfg = new CollectionConfiguration();\n\ncolCfg.setCollocated(true); \n\n// 
Create collocated queue.\nIgniteQueue<String> queue = 
ignite.queue(\"queueName\", 0, colCfg);\n\n// Create collocated 
set.\nIgniteSet<String> set = ignite.set(\"setName\", colCfg);",
+      "language": "java",
+      "name": "Queue"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nCollectionConfiguration 
colCfg = new CollectionConfiguration();\n\ncolCfg.setCollocated(true); \n\n// 
Create collocated set.\nIgniteSet<String> set = ignite.set(\"setName\", 
colCfg);",
+      "language": "text",
+      "name": "Set"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "info",
+  "title": "",
+  "body": "Non-collocated mode only makes sense for and is only supported for 
`PARTITIONED` caches."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Bounded Queues"
+}
+[/block]
+Bounded queues allow users to have many queues with a maximum size, which gives better control over the overall cache capacity. They can be either *collocated* or *non-collocated*. When bounded queues are relatively small and used in collocated mode, all queue operations become extremely fast. Moreover, when used in combination with the compute grid, users can collocate their compute jobs with the cluster nodes on which queues are located, making sure that all operations are local and there is no (or minimal) data movement. 
+
+Here is an example of how a job could be sent directly to the node on which a queue resides:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nCollectionConfiguration 
colCfg = new CollectionConfiguration();\n\ncolCfg.setCollocated(true); 
\n\ncolCfg.setCacheName(\"cacheName\");\n\nfinal IgniteQueue<String> queue = 
ignite.queue(\"queueName\", 20, colCfg);\n \n// Add queue elements (queue is 
cached on some node).\nfor (int i = 0; i < 20; i++)\n    queue.add(\"Value \" + 
Integer.toString(i));\n \nIgniteRunnable queuePoller = new IgniteRunnable() {\n 
   @Override public void run() throws IgniteException {\n        // Poll is 
local operation due to collocation.\n        for (int i = 0; i < 20; i++)\n     
       System.out.println(\"Polled element: \" + queue.poll());\n    
}\n};\n\n// Drain queue on the node where the queue is 
cached.\nignite.compute().affinityRun(\"cacheName\", \"queueName\", 
queuePoller);",
+      "language": "java",
+      "name": "Queue"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "success",
+  "body": "Refer to [Collocate Compute and 
Data](doc:collocate-compute-and-data) section for more information on 
collocating computations with data."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Cache Queues and Load Balancing"
+}
+[/block]
+Given that elements will remain in the queue until someone takes them, and 
that no two nodes should ever receive the same element from the queue, cache 
queues can be used as an alternate work distribution and load balancing 
approach within Ignite. 
+
+For example, you could simply add computations, such as instances of `IgniteRunnable`, to a queue, and have threads on remote nodes call the `IgniteQueue.take()` method, which blocks if the queue is empty. Once the `take()` method returns a job, the thread processes it and calls `take()` again to get the next job. With this approach, threads on remote nodes only start working on the next job once they have completed the previous one, hence creating an ideally balanced system where every node takes only the number of jobs it can process, and not more.
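+Below is a minimal sketch of this pattern; the queue name `jobQueue`, its configuration, and the job payloads are illustrative assumptions only:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Producer: add jobs (IgniteRunnable instances) to the distributed queue.\nIgniteQueue<IgniteRunnable> jobQueue = ignite.queue(\"jobQueue\", 0, new CollectionConfiguration());\n\nfor (int i = 0; i < 100; i++) {\n    final int jobId = i;\n\n    jobQueue.put((IgniteRunnable)() -> System.out.println(\"Processing job: \" + jobId));\n}\n\n// Consumer (a worker thread on each remote node): take() blocks while\n// the queue is empty, so a node only pulls the next job after it has\n// finished the previous one.\nwhile (!Thread.currentThread().isInterrupted()) {\n    IgniteRunnable job = jobQueue.take();\n\n    job.run();\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]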
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Collection Configuration"
+}
+[/block]
+Ignite collections can be configured programmatically via `CollectionConfiguration` (see the examples above). The following configuration parameters can be used:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Setter Method",
+    "0-0": "`setCollocated(boolean)`",
+    "1-0": "`setCacheName(String)`",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-2": "false",
+    "0-1": "Sets collocation mode.",
+    "1-1": "Set name of the cache to store this collection.",
+    "1-2": "null"
+  },
+  "cols": 3,
+  "rows": 2
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/distributed-events/automatic-batching.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/distributed-events/automatic-batching.md 
b/wiki/documentation/distributed-events/automatic-batching.md
new file mode 100755
index 0000000..ed124a4
--- /dev/null
+++ b/wiki/documentation/distributed-events/automatic-batching.md
@@ -0,0 +1,34 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+Ignite automatically groups or batches event notifications that are generated 
as a result of cache events occurring within the cluster.
+
+Each activity in a cache can result in an event notification being generated and sent. For systems where cache activity is high, getting notified of every event can be network intensive, possibly leading to decreased performance of cache operations in the grid.
+
+In Ignite, event notifications can be grouped together and sent in batches or at timed intervals. Here is an example of how this can be done:
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n \n// Get an instance of 
named cache.\nfinal IgniteCache<Integer, String> cache = 
ignite.jcache(\"cacheName\");\n \n// Sample remote filter which only accepts 
events for keys\n// that are greater than or equal to 
10.\nIgnitePredicate<CacheEvent> rmtLsnr = new IgnitePredicate<CacheEvent>() 
{\n    @Override public boolean apply(CacheEvent evt) {\n        
System.out.println(\"Cache event: \" + evt);\n \n        int key = evt.key();\n 
\n        return key >= 10;\n    }\n};\n \n// Subscribe to cache events 
occurring on all nodes \n// that have the specified cache running. \n// Send 
notifications in batches of 
10.\nignite.events(ignite.cluster().forCacheNodes(\"cacheName\")).remoteListen(\n\t\t10
 /*batch size*/, 0 /*time intervals*/, false, null, rmtLsnr, EVTS_CACHE);\n 
\n// Generate cache events.\nfor (int i = 0; i < 20; i++)\n    cache.put(i, 
Integer.toString(i));",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/distributed-events/events.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/distributed-events/events.md 
b/wiki/documentation/distributed-events/events.md
new file mode 100755
index 0000000..eb43fee
--- /dev/null
+++ b/wiki/documentation/distributed-events/events.md
@@ -0,0 +1,119 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+Ignite distributed events functionality allows applications to receive notifications when a variety of events occur in the distributed grid environment. You can automatically get notified of task executions, and of read, write, or query operations occurring on local or remote nodes within the cluster.
+
+Distributed events functionality is provided via `IgniteEvents` interface. You 
can get an instance of `IgniteEvents` from Ignite as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteEvents evts = 
ignite.events();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Subscribe for Events"
+}
+[/block]
+Listen methods can be used to receive notifications of specified events happening in the cluster. These methods register a listener on local or remote nodes for the specified events. Whenever one of these events occurs on a node, the listener is notified. 
+##Local Events
+The `localListen(...)` method registers event listeners for the specified events on the local node only.
+##Remote Events
+The `remoteListen(...)` method registers event listeners for the specified events on all nodes within the cluster or cluster group. Following is an example of each method:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Local listener that 
listens to local events.\nIgnitePredicate<CacheEvent> locLsnr = evt -> {\n  
System.out.println(\"Received event [evt=\" + evt.name() + \", key=\" + 
evt.key() + \n    \", oldVal=\" + evt.oldValue() + \", newVal=\" + 
evt.newValue());\n\n  return true; // Continue listening.\n};\n\n// Subscribe 
to specified cache events occurring on local 
node.\nignite.events().localListen(locLsnr,\n  
EventType.EVT_CACHE_OBJECT_PUT,\n  EventType.EVT_CACHE_OBJECT_READ,\n  
EventType.EVT_CACHE_OBJECT_REMOVED);\n\n// Get an instance of named 
cache.\nfinal IgniteCache<Integer, String> cache = 
ignite.jcache(\"cacheName\");\n\n// Generate cache events.\nfor (int i = 0; i < 
20; i++)\n  cache.put(i, Integer.toString(i));\n",
+      "language": "java",
+      "name": "local listen"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Get an instance of 
named cache.\nfinal IgniteCache<Integer, String> cache = 
ignite.jcache(\"cacheName\");\n\n// Sample remote filter which only accepts 
events for keys\n// that are greater than or equal to 
10.\nIgnitePredicate<CacheEvent> rmtLsnr = evt -> evt.<Integer>key() >= 
10;\n\n// Subscribe to specified cache events on all nodes that have cache 
running.\nignite.events(ignite.cluster().forCacheNodes(\"cacheName\")).remoteListen(null,
 rmtLsnr,                                                                 
EventType.EVT_CACHE_OBJECT_PUT,\n  EventType.EVT_CACHE_OBJECT_READ,\n  
EventType.EVT_CACHE_OBJECT_REMOVED);\n\n// Generate cache events.\nfor (int i = 
0; i < 20; i++)\n  cache.put(i, Integer.toString(i));\n",
+      "language": "java",
+      "name": "remote listen"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n \n// Get an instance of 
named cache.\nfinal IgniteCache<Integer, String> cache = 
ignite.jcache(\"cacheName\");\n \n// Sample remote filter which only accepts 
events for keys\n// that are greater than or equal to 
10.\nIgnitePredicate<CacheEvent> rmtLsnr = new IgnitePredicate<CacheEvent>() 
{\n    @Override public boolean apply(CacheEvent evt) {\n        
System.out.println(\"Cache event: \" + evt);\n \n        int key = evt.key();\n 
\n        return key >= 10;\n    }\n};\n \n// Subscribe to specified cache 
events occurring on \n// all nodes that have the specified cache running.\nignite.events(ignite.cluster().forCacheNodes(\"cacheName\")).remoteListen(null, rmtLsnr,\n  EVT_CACHE_OBJECT_PUT,\n  EVT_CACHE_OBJECT_READ,\n  EVT_CACHE_OBJECT_REMOVED);\n \n// Generate cache events.\nfor (int i = 0; i < 20; i++)\n    cache.put(i, Integer.toString(i));",
+      "language": "java",
+      "name": "java7 listen"
+    }
+  ]
+}
+[/block]
+In the above example, `EVT_CACHE_OBJECT_PUT`, `EVT_CACHE_OBJECT_READ`, and `EVT_CACHE_OBJECT_REMOVED` are pre-defined event type constants defined in the `EventType` interface.
+[block:callout]
+{
+  "type": "info",
+  "body": "`EventType` interface defines various event type constants that can 
be used with listen methods. Refer to 
[javadoc](https://ignite.incubator.apache.org/releases/1.0.0/javadoc/) for 
complete list of these event types."
+}
+[/block]
+
+[block:callout]
+{
+  "type": "warning",
+  "body": "Event types passed in as parameter in  `localListen(...)` and 
`remoteListen(...)` methods must also be configured in `IgniteConfiguration`. 
See [configuration](#configuration) example below."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Query for Events"
+}
+[/block]
+All events generated in the system are kept on the node where they occur. The `IgniteEvents` API provides methods to query for these events.
+##Local Events
+The `localQuery(...)` method queries for events on the local node using the passed-in predicate filter. A collection of the local events that satisfy the filter is returned.
+##Remote Events
+The `remoteQuery(...)` method asynchronously queries for events on remote nodes of this cluster group using the passed-in predicate filter. This operation is distributed and hence can fail on the communication layer and generally takes much longer than a local query. Note that this method does not block and returns immediately with a future.
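+As a rough sketch, a local query might look like the following (assuming the cache events used here are enabled in the configuration, as described in the next section):
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Query the local node for cache PUT events whose keys are >= 10.\nCollection<CacheEvent> evts = ignite.events().localQuery(\n    (IgnitePredicate<CacheEvent>)evt -> evt.<Integer>key() >= 10,\n    EventType.EVT_CACHE_OBJECT_PUT);\n\nfor (CacheEvent evt : evts)\n    System.out.println(\"Queried event: \" + evt);",
+      "language": "java"
+    }
+  ]
+}
+[/block]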
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+To get notified of any task or cache events occurring within the cluster, the `includeEventTypes` property of `IgniteConfiguration` must be set to include those event types.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n \t\t... \n    
<!-- Enable cache events. -->\n    <property name=\"includeEventTypes\">\n      
  <util:constant 
static-field=\"org.apache.ignite.events.EventType.EVTS_CACHE\"/>\n    
</property>\n  \t...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n\n// 
Enable cache events.\ncfg.setIncludeEventTypes(EVTS_CACHE);\n\n// Start Ignite 
node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+By default, event notifications are turned off for performance reasons.
+[block:callout]
+{
+  "type": "success",
+  "body": "Since thousands of events per second are generated, it creates an 
additional load on the system. This can lead to significant performance 
degradation. Therefore, it is highly recommended to enable only those events 
that your application logic requires."
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/distributed-file-system/igfs.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/distributed-file-system/igfs.md 
b/wiki/documentation/distributed-file-system/igfs.md
new file mode 100755
index 0000000..fcefcef
--- /dev/null
+++ b/wiki/documentation/distributed-file-system/igfs.md
@@ -0,0 +1,19 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+We are currently adding documentation for the Ignite File System. In the meantime, you can refer to the [javadoc](https://ignite.incubator.apache.org/releases/1.0.0/javadoc/org/apache/ignite/IgniteFs.html).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/wiki/documentation/distributed-messaging/messaging.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/distributed-messaging/messaging.md 
b/wiki/documentation/distributed-messaging/messaging.md
new file mode 100755
index 0000000..f526b45
--- /dev/null
+++ b/wiki/documentation/distributed-messaging/messaging.md
@@ -0,0 +1,91 @@
+<!--
+  Licensed to the Apache Software Foundation (ASF) under one or more
+  contributor license agreements.  See the NOTICE file distributed with
+  this work for additional information regarding copyright ownership.
+  The ASF licenses this file to You under the Apache License, Version 2.0
+  (the "License"); you may not use this file except in compliance with
+  the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License.
+-->
+
+
+Ignite distributed messaging allows for topic-based cluster-wide communication between all nodes. Messages with a specified message topic can be distributed to all nodes, or to a sub-group of nodes, that have subscribed to that topic. 
+
+Ignite messaging is based on the publish-subscribe paradigm, where publishers and subscribers are connected by a common topic. When one of the nodes sends a message A for topic T, it is published on all nodes that have subscribed to T.
+[block:callout]
+{
+  "type": "info",
+  "body": "Any new node joining the cluster automatically gets subscribed to 
all the topics that other nodes in the cluster (or [cluster 
group](/docs/cluster-groups)) are subscribed to."
+}
+[/block]
+Distributed messaging functionality in Ignite is provided via the `IgniteMessaging` interface. You can get an instance of `IgniteMessaging` like so:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Messaging instance 
over this cluster.\nIgniteMessaging msg = ignite.message();\n\n// Messaging 
instance over given cluster group (in this case, remote 
nodes).\nIgniteMessaging rmtMsg = 
ignite.message(ignite.cluster().forRemotes());",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Publish Messages"
+}
+[/block]
+Send methods publish messages with a specified message topic to all nodes. Messages can be sent in an *ordered* or *unordered* manner.
+##Ordered Messages
+`sendOrdered(...)` method can be used if you want to receive messages in the 
order they were sent. A timeout parameter is passed to specify how long a 
message will stay in the queue to wait for messages that are supposed to be 
sent before this message. If the timeout expires, then all the messages that 
have not yet arrived for a given topic on that node will be ignored.
+##Unordered Messages
+`send(...)` methods do not guarantee message ordering. This means that, when 
you sequentially send message A and message B, you are not guaranteed that the 
target node first receives A and then B.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Subscribe for Messages"
+}
+[/block]
+Listen methods are used to listen/subscribe for messages. When these methods are called, a listener for the specified message topic is registered on all (or a sub-group of) nodes to listen for new messages. A predicate is passed to the listen methods; its boolean return value tells the listener whether to continue or stop listening for new messages. 
+##Local Listen
+The `localListen(...)` method registers a message listener for the specified topic on the local node only and listens for messages from any node in *this* cluster group.
+##Remote Listen
+The `remoteListen(...)` method registers message listeners for the specified topic on all nodes in *this* cluster group and listens for messages from any node in *this* cluster group.
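+As a small illustration of local subscription, a sketch might look like the following (the topic name and message payload are arbitrary):
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteMessaging msg = ignite.message();\n\n// Listen for messages on the given topic on the local node only.\nmsg.localListen(\"MyTopic\", (nodeId, received) -> {\n    System.out.println(\"Received message [msg=\" + received + \", from=\" + nodeId + ']');\n\n    return true; // Return true to continue listening.\n});\n\n// Publish a message; it is delivered to all nodes subscribed to the topic.\nmsg.send(\"MyTopic\", \"Hello, Ignite!\");",
+      "language": "java"
+    }
+  ]
+}
+[/block]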
+
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Example"
+}
+[/block]
+The following example shows message exchange between remote nodes.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n \nIgniteMessaging rmtMsg = 
ignite.message(ignite.cluster().forRemotes());\n \n// Add listener for 
ordered messages on all remote 
nodes.\nrmtMsg.remoteListen(\"MyOrderedTopic\", (nodeId, msg) -> {\n    
System.out.println(\"Received ordered message [msg=\" + msg + \", from=\" + 
nodeId + ']');\n \n    return true; // Return true to continue 
listening.\n});\n \n// Send ordered messages to remote nodes.\nfor (int i = 0; 
i < 10; i++)\n    rmtMsg.sendOrdered(\"MyOrderedTopic\", Integer.toString(i));",
+      "language": "java",
+      "name": "Ordered Messaging"
+    },
+    {
+      "code": " Ignite ignite = Ignition.ignite();\n \nIgniteMessaging rmtMsg 
= ignite.message(ignite.cluster().forRemotes());\n \n// Add listener for 
unordered messages on all remote 
nodes.\nrmtMsg.remoteListen(\"MyUnOrderedTopic\", (nodeId, msg) -> {\n    
System.out.println(\"Received unordered message [msg=\" + msg + \", from=\" + 
nodeId + ']');\n \n    return true; // Return true to continue 
listening.\n});\n \n// Send unordered messages to remote nodes.\nfor (int i = 
0; i < 10; i++)\n    rmtMsg.send(\"MyUnOrderedTopic\", Integer.toString(i));",
+      "language": "java",
+      "name": "Unordered Messaging"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Get cluster group of 
remote nodes.\nClusterGroup rmtPrj = ignite.cluster().forRemotes();\n\n// Get 
messaging instance over remote nodes.\nIgniteMessaging msg = 
ignite.message(rmtPrj);\n\n// Add message listener for specified topic on all 
remote nodes.\nmsg.remoteListen(\"myOrderedTopic\", new IgniteBiPredicate<UUID, 
String>() {\n    @Override public boolean apply(UUID nodeId, String msg) {\n    
    System.out.println(\"Received ordered message [msg=\" + msg + \", from=\" + 
nodeId + ']');\n\n        return true; // Return true to continue listening.\n  
  }\n});\n\n// Send ordered messages to all remote nodes.\nfor (int i = 0; i < 
10; i++)\n    msg.sendOrdered(\"myOrderedTopic\", Integer.toString(i), 0);",
+      "language": "java",
+      "name": "java7 ordered"
+    }
+  ]
+}
+[/block]
\ No newline at end of file
