http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/data-loading.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/data-loading.md 
b/docs/wiki/data-grid/data-loading.md
new file mode 100755
index 0000000..1bd43ac
--- /dev/null
+++ b/docs/wiki/data-grid/data-loading.md
@@ -0,0 +1,77 @@
+Data loading usually refers to initializing cache data on startup. Using standard cache `put(...)` or `putAll(...)` operations is generally inefficient for loading large amounts of data.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "IgniteDataLoader"
+}
+[/block]
+For fast loading of large amounts of data, Ignite provides a utility interface, `IgniteDataLoader`, which internally batches keys together and collocates those batches with the nodes on which the data will be cached.
+
+The high loading speed is achieved with the following techniques:
+  * Entries that are mapped to the same cluster member will be batched 
together in a buffer.
+  * Multiple buffers can coexist at the same time.
+  * To avoid running out of memory, the data loader has a maximum number of buffers it can process concurrently.
+
+To add data to the data loader, call the `IgniteDataLoader.addData(...)` method.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Get the data loader reference and load data.\ntry 
(IgniteDataLoader<Integer, String> ldr = ignite.dataLoader(\"myCache\")) {    
\n    // Load entries.\n    for (int i = 0; i < 100000; i++)\n        
ldr.addData(i, Integer.toString(i));\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+## Allow Overwrite
+By default, the data loader supports only initial data loading, which means that if it encounters an entry that is already in the cache, it will skip it. This is the most efficient mode, as the data loader does not have to worry about data versioning in the background.
+
+If you anticipate that the data may already be in the cache and you need to overwrite it, call `IgniteDataLoader.allowOverwrite(true)`.
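A minimal sketch of enabling overwrite (assuming the same `ignite` instance and `myCache` cache used in the example above):

```java
// Get the data loader and allow overwriting of existing entries.
try (IgniteDataLoader<Integer, String> ldr = ignite.dataLoader("myCache")) {
    // Entries already present in the cache will be updated
    // instead of being skipped.
    ldr.allowOverwrite(true);

    for (int i = 0; i < 100000; i++)
        ldr.addData(i, Integer.toString(i));
}
```

Note that loading with overwrite enabled is slower than the default mode, so enable it only when the cache may already contain data for the loaded keys.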
+
+## Using Updater
+For cases when you need to execute custom logic instead of just adding new data, you can take advantage of the `IgniteDataLoader.Updater` API.
+
+In the example below, we generate random numbers and store them as keys. The number of times the same number is generated is stored as the value. The `Updater` increments the value by 1 each time we load the same key into the cache.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Get the data loader reference and load data.\ntry (IgniteDataLoader<Integer, Long> ldr = ignite.dataLoader(\"myCache\")) {\n    // Configure the updater that increments current values.\n    ldr.updater((cache, entries) -> {\n      for (Map.Entry<Integer, Long> e : entries)\n        cache.invoke(e.getKey(), (entry, args) -> {\n          Long val = entry.getValue();\n\n          entry.setValue(val == null ? 1L : val + 1);\n\n          return null;\n        });\n    });\n \n    for (int i = 0; i < CNT; i++)\n        ldr.addData(RAND.nextInt(100), 1L);\n}",
+      "language": "java",
+      "name": "updater"
+    },
+    {
+      "code": "// Closure that increments the passed-in value.\nfinal GridClosure<Long, Long> INC = new GridClosure<Long, Long>() {\n    @Override public Long apply(Long e) {\n        return e == null ? 1L : e + 1;\n    }\n};\n\n// Get the data loader reference and load data.\ntry (GridDataLoader<Integer, Long> ldr = grid.dataLoader(\"myCache\")) {\n    // Configure the updater.\n    ldr.updater(new GridDataLoadCacheUpdater<Integer, Long>() {\n        @Override public void update(GridCache<Integer, Long> cache,\n            Collection<Map.Entry<Integer, Long>> entries) throws GridException {\n            for (Map.Entry<Integer, Long> e : entries)\n                cache.transform(e.getKey(), INC);\n        }\n    });\n \n    for (int i = 0; i < CNT; i++)\n        ldr.addData(RAND.nextInt(100), 1L);\n}",
+      "language": "java",
+      "name": "java7 updater"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "IgniteCache.loadCache()"
+}
+[/block]
+Another way to load large amounts of data into the cache is through the [CacheStore.loadCache()](docs/persistent-store#loadcache-) method, which allows loading cache data without passing in all the keys that need to be loaded.
+
+The `IgniteCache.loadCache()` method delegates to the `CacheStore.loadCache()` method on every cluster member that is running the cache. To invoke loading only on the local cluster node, use the `IgniteCache.localLoadCache()` method.
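As a sketch, a typical invocation looks like the following (the cache name and the entry-count argument are illustrative; any extra arguments are passed through to `CacheStore.loadCache()`):

```java
IgniteCache<Long, Person> cache = ignite.jcache("personCache");

// Trigger CacheStore.loadCache() on all caching nodes, passing
// the expected entry count through to the store implementation.
cache.loadCache(null, 100_000);
```

The first parameter is an optional filtering predicate; passing `null` loads all entries provided by the store.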
+[block:callout]
+{
+  "type": "info",
+  "body": "In case of partitioned caches, keys that are not mapped to this 
node, either as primary or backups, will be automatically discarded by the 
cache."
+}
+[/block]
+Here is an example of a `CacheStore.loadCache()` implementation. For a complete example of how a `CacheStore` can be implemented, refer to [Persistent Store](doc:persistent-store).
+[block:code]
+{
+  "codes": [
+    {
+      "code": "public class CacheJdbcPersonStore extends CacheStoreAdapter<Long, Person> {\n  ...\n  // This method is called whenever \"IgniteCache.loadCache()\" or\n  // \"IgniteCache.localLoadCache()\" methods are called.\n  @Override public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {\n    if (args == null || args.length == 0 || args[0] == null)\n      throw new CacheLoaderException(\"Expected entry count parameter is not provided.\");\n\n    final int entryCnt = (Integer)args[0];\n\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs = st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), rs.getString(2), rs.getString(3));\n\n            clo.apply(person.getId(), person);\n\n            cnt++;\n          }\n        }\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load values from cache store.\", e);\n    }\n  }\n  ...\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/evictions.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/evictions.md b/docs/wiki/data-grid/evictions.md
new file mode 100755
index 0000000..9ab3d58
--- /dev/null
+++ b/docs/wiki/data-grid/evictions.md
@@ -0,0 +1,86 @@
+Eviction policies control the maximum number of elements that can be stored in a cache's on-heap memory. Whenever the maximum on-heap cache size is reached, entries are evicted into [off-heap space](doc:off-heap-memory), if one is enabled.
+
+In Ignite, eviction policies are pluggable and are controlled via the `CacheEvictionPolicy` interface. An implementation of an eviction policy is notified of every cache change and defines the algorithm for choosing the entries to evict from the cache.
+[block:callout]
+{
+  "type": "info",
+  "body": "If your data set fits in memory, an eviction policy will not provide any benefit and should be disabled, which is the default behavior."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Least Recently Used (LRU)"
+}
+[/block]
+LRU eviction policy is based on [Least Recently Used 
(LRU)](http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) 
algorithm, which ensures that the least recently used entry (i.e. the entry 
that has not been touched the longest) gets evicted first. 
+[block:callout]
+{
+  "type": "success",
+  "body": "LRU eviction policy nicely fits most of the use cases for caching. 
Use it whenever in doubt."
+}
+[/block]
+This eviction policy is implemented by `CacheLruEvictionPolicy` and can be 
configured via `CacheConfiguration`.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  
<property name=\"name\" value=\"myCache\"/>\n    ...\n    <property 
name=\"evictionPolicy\">\n        <!-- LRU eviction policy. -->\n        <bean 
class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n        
    <!-- Set the maximum cache size to 1 million (default is 100,000). -->\n    
        <property name=\"maxSize\" value=\"1000000\"/>\n        </bean>\n    
</property>\n    ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheLruEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "First In First Out (FIFO)"
+}
+[/block]
+FIFO eviction policy is based on the [First-In-First-Out (FIFO)](https://en.wikipedia.org/wiki/FIFO) algorithm, which ensures that the entry that has been in the cache the longest will be evicted first. It differs from `CacheLruEvictionPolicy` in that it ignores the access order of entries.
+
+This eviction policy is implemented by `CacheFifoEvictionPolicy` and can be 
configured via `CacheConfiguration`.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  
<property name=\"name\" value=\"myCache\"/>\n    ...\n    <property 
name=\"evictionPolicy\">\n        <!-- FIFO eviction policy. -->\n        <bean 
class=\"org.apache.ignite.cache.eviction.fifo.CacheFifoEvictionPolicy\">\n      
      <!-- Set the maximum cache size to 1 million (default is 100,000). -->\n  
          <property name=\"maxSize\" value=\"1000000\"/>\n        </bean>\n    
</property>\n    ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheFifoEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Random"
+}
+[/block]
+Random eviction policy randomly chooses entries to evict. This eviction policy is mainly used for debugging and benchmarking purposes.
+
+This eviction policy is implemented by `CacheRandomEvictionPolicy` and can be 
configured via `CacheConfiguration`.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  
<property name=\"name\" value=\"myCache\"/>\n    ...\n    <property 
name=\"evictionPolicy\">\n        <!-- Random eviction policy. -->\n        
<bean 
class=\"org.apache.ignite.cache.eviction.random.CacheRandomEvictionPolicy\">\n  
          <!-- Set the maximum cache size to 1 million (default is 100,000). 
-->\n            <property name=\"maxSize\" value=\"1000000\"/>\n        
</bean>\n    </property>\n    ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheRandomEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/hibernate-l2-cache.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/hibernate-l2-cache.md 
b/docs/wiki/data-grid/hibernate-l2-cache.md
new file mode 100755
index 0000000..56cb0bc
--- /dev/null
+++ b/docs/wiki/data-grid/hibernate-l2-cache.md
@@ -0,0 +1,173 @@
+Ignite In-Memory Data Fabric can be used as a [Hibernate](http://hibernate.org) second-level cache (or L2 cache), which can significantly speed up the persistence layer of your application.
+
+[Hibernate](http://hibernate.org) is a well-known and widely used framework 
for Object-Relational Mapping (ORM). While interacting closely with an SQL 
database, it performs caching of retrieved data to minimize expensive database 
requests.
+[block:image]
+{
+  "images": [
+    {
+      "image": [
+        "https://www.filepicker.io/api/file/D35hL3OuQ46YA4v3BLwJ",
+        "hibernate-L2-cache.png",
+        "600",
+        "478",
+        "#b7917a",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+All work with Hibernate database-mapped objects is done within a session, usually bound to a worker thread or a Web session. By default, Hibernate only uses the per-session (L1) cache, so objects cached in one session are not seen in another. However, a global second-level (L2) cache may be used, in which the cached objects are visible to all sessions that use the same L2 cache configuration. This usually gives a significantly greater performance gain, because each newly-created session can take full advantage of the data already present in L2 cache memory (which outlives any session-local L1 cache).
+
+While the L1 cache is always enabled and fully implemented by Hibernate internally, the L2 cache is optional and can have multiple pluggable implementations. Ignite can be easily plugged in as an L2 cache implementation, and can be used in all access modes (`READ_ONLY`, `READ_WRITE`, `NONSTRICT_READ_WRITE`, and `TRANSACTIONAL`), supporting a wide range of related features:
+  * Caching to memory and disk, as well as off-heap memory.
+  * Cache transactions, which make the `TRANSACTIONAL` mode possible.
+  * Clustering, with 2 different replication modes: `REPLICATED` and `PARTITIONED`.
+
+To start using Ignite as a Hibernate L2 cache, you need to perform 3 simple steps:
+  * Add Ignite libraries to your application's classpath.
+  * Enable the L2 cache and specify the Ignite implementation class in the L2 cache configuration.
+  * Configure Ignite caches for the L2 cache regions and start the embedded Ignite node (and, optionally, external Ignite nodes).
+ 
+In the section below we cover these steps in more detail.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "L2 Cache Configuration"
+}
+[/block]
+To configure Ignite In-Memory Data Fabric as a Hibernate L2 cache, without any 
changes required to the existing Hibernate code, you need to:
+  * Configure Hibernate itself to use Ignite as L2 cache.
+  * Configure Ignite cache appropriately. 
+
+##Hibernate Configuration Example
+A typical Hibernate configuration for L2 cache with Ignite would look like the 
one below:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<hibernate-configuration>\n    <session-factory>\n        ...\n        <!-- Enable L2 cache. -->\n        <property name=\"cache.use_second_level_cache\">true</property>\n        \n        <!-- Generate L2 cache statistics. -->\n        <property name=\"generate_statistics\">true</property>\n        \n        <!-- Specify GridGain as L2 cache provider. -->\n        <property name=\"cache.region.factory_class\">org.gridgain.grid.cache.hibernate.GridHibernateRegionFactory</property>\n        \n        <!-- Specify the name of the grid, that will be used for second level caching. -->\n        <property name=\"org.gridgain.hibernate.grid_name\">hibernate-grid</property>\n        \n        <!-- Set default L2 cache access type. -->\n        <property name=\"org.gridgain.hibernate.default_access_type\">READ_ONLY</property>\n        \n        <!-- Specify the entity classes for mapping. -->\n        <mapping class=\"com.mycompany.MyEntity1\"/>\n        <mapping class=\"com.mycompany.MyEntity2\"/>\n        \n        <!-- Per-class L2 cache settings. -->\n        <class-cache class=\"com.mycompany.MyEntity1\" usage=\"read-only\"/>\n        <class-cache class=\"com.mycompany.MyEntity2\" usage=\"read-only\"/>\n        <collection-cache collection=\"com.mycompany.MyEntity1.children\" usage=\"read-only\"/>\n        ...\n    </session-factory>\n</hibernate-configuration>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+Here, we do the following:
+  * Enable L2 cache (and, optionally, the L2 cache statistics generation).
+  * Specify Ignite as L2 cache implementation.
+  * Specify the name of the caching grid (should correspond to the one in 
Ignite configuration).
+  * Specify the entity classes and configure caching for each class (a 
corresponding cache region should be configured in Ignite). 
+
+##Ignite Configuration Example
+A typical Ignite configuration for Hibernate L2 caching looks like this:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<!-- Basic configuration for atomic cache. -->\n<bean id=\"atomic-cache\" class=\"org.apache.ignite.configuration.CacheConfiguration\" abstract=\"true\">\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    <property name=\"atomicityMode\" value=\"ATOMIC\"/>\n    <property name=\"writeSynchronizationMode\" value=\"FULL_SYNC\"/>\n</bean>\n \n<!-- Basic configuration for transactional cache. -->\n<bean id=\"transactional-cache\" class=\"org.apache.ignite.configuration.CacheConfiguration\" abstract=\"true\">\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    <property name=\"atomicityMode\" value=\"TRANSACTIONAL\"/>\n    <property name=\"writeSynchronizationMode\" value=\"FULL_SYNC\"/>\n</bean>\n \n<bean id=\"ignite.cfg\" class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    <!--\n        Specify the name of the caching grid (should correspond to the\n        one in Hibernate configuration).\n    -->\n    <property name=\"gridName\" value=\"hibernate-grid\"/>\n    ...\n    <!--\n        Specify cache configuration for each L2 cache region (which corresponds\n        to a full class name or a full association name).\n    -->\n    <property name=\"cacheConfiguration\">\n        <list>\n            <!-- Configurations for entity caches. -->\n            <bean parent=\"transactional-cache\">\n                <property name=\"name\" value=\"com.mycompany.MyEntity1\"/>\n            </bean>\n            <bean parent=\"transactional-cache\">\n                <property name=\"name\" value=\"com.mycompany.MyEntity2\"/>\n            </bean>\n            <bean parent=\"transactional-cache\">\n                <property name=\"name\" value=\"com.mycompany.MyEntity1.children\"/>\n            </bean>\n \n            <!-- Configuration for update timestamps cache. -->\n            <bean parent=\"atomic-cache\">\n                <property name=\"name\" value=\"org.hibernate.cache.spi.UpdateTimestampsCache\"/>\n            </bean>\n \n            <!-- Configuration for query result cache. -->\n            <bean parent=\"atomic-cache\">\n                <property name=\"name\" value=\"org.hibernate.cache.internal.StandardQueryCache\"/>\n            </bean>\n        </list>\n    </property>\n    ...\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+Here, we specify the cache configuration for each L2 cache region:
+  * We use `PARTITIONED` cache to split the data between caching nodes. 
Another possible strategy is to enable `REPLICATED` mode, thus replicating a 
full dataset between all caching nodes. See Cache Distribution Models for more 
information.
+  * We specify the cache name that corresponds to an L2 cache region name (either a full class name or a full association name).
+  * We use `TRANSACTIONAL` atomicity mode to take advantage of cache 
transactions.
+  * We enable `FULL_SYNC` to be always fully synchronized with backup nodes.
+
+Additionally, we specify a cache for update timestamps, which may be `ATOMIC`, 
for better performance.
+
+Having configured the Ignite caching node, we can start it from within our code as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": 
"Ignition.start(\"my-config-folder/my-ignite-configuration.xml\");",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+After the above line is executed, the internal Ignite node is started and ready to cache data. We can also start additional standalone nodes by running the following command from the console:
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "$IGNITE_HOME/bin/ignite.sh 
my-config-folder/my-ignite-configuration.xml",
+      "language": "text"
+    }
+  ]
+}
+[/block]
+For Windows, use the `.bat` script in the same folder.
+[block:callout]
+{
+  "type": "success",
+  "body": "The nodes may be started on other hosts as well, forming a distributed caching cluster. Be sure to specify the correct network settings in the Ignite configuration file for that."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Query Cache"
+}
+[/block]
+In addition to the L2 cache, Hibernate offers a query cache. This cache stores the results of queries (either HQL or Criteria) with a given set of parameters, so when you repeat the query with the same parameter set, it hits the cache without going to the database.
+
+The query cache may be useful if you have a number of queries that repeat with the same parameter values. As with the L2 cache, Hibernate relies on a third-party cache implementation, and Ignite In-Memory Data Fabric can be used as such.
+[block:callout]
+{
+  "type": "success",
+  "body": "Consider using support for [SQL-based In-Memory 
Queries](/docs/cache-queries) in Ignite which should perform faster than going 
through Hibernate."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Query Cache Configuration"
+}
+[/block]
+The [configuration](#l2-cache-configuration) information above fully applies to the query cache, but some additional configuration and code changes are required.
+
+##Hibernate Configuration
+To enable the query cache in Hibernate, you only need one additional line in the configuration file:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<!-- Enable query cache. -->\n<property 
name=\"cache.use_query_cache\">true</property>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+Yet, a code modification is required: for each query that you want to cache, you should enable the `cacheable` flag by calling `setCacheable(true)`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Session ses = ...;\n \n// Create Criteria query.\nCriteria 
criteria = ses.createCriteria(cls);\n \n// Enable cacheable 
flag.\ncriteria.setCacheable(true);\n \n...",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+After this is done, your query results will be cached.
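The same flag applies to HQL queries. A minimal sketch (the `Person` entity and the query itself are hypothetical):

```java
// Create an HQL query and mark it as cacheable.
Query qry = ses.createQuery("from Person where name = :name");

qry.setParameter("name", "John");

// Enable the cacheable flag so results go to the query cache.
qry.setCacheable(true);

List res = qry.list();
```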
+
+##Ignite Configuration
+To enable Hibernate query caching in Ignite, you need to specify an additional 
cache configuration:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "\n<property name=\"cacheConfiguration\">\n    <list>\n        
...\n        <!-- Query cache (refers to atomic cache defined in above 
example). -->\n        <bean parent=\"atomic-cache\">\n            <property 
name=\"name\" value=\"org.hibernate.cache.internal.StandardQueryCache\"/>\n     
   </bean>\n    </list>\n</property>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+Notice that the cache is made `ATOMIC` for better performance.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/jcache.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/jcache.md b/docs/wiki/data-grid/jcache.md
new file mode 100755
index 0000000..3b0b6cf
--- /dev/null
+++ b/docs/wiki/data-grid/jcache.md
@@ -0,0 +1,99 @@
+Apache Ignite data grid is an implementation of the JCache (JSR 107) specification (currently undergoing JSR 107 TCK testing). JCache provides a simple-to-use yet powerful API for data access. However, the specification purposely omits any details about data distribution and consistency to allow vendors enough freedom in their own implementations.
+
+In addition to JCache, Ignite provides ACID transactions, data querying capabilities (including SQL), various memory models, and more.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "IgniteCache"
+}
+[/block]
+`IgniteCache` is based on **JCache (JSR 107)**, so at the very basic level the APIs can be reduced to the `javax.cache.Cache` interface. However, the `IgniteCache` API also provides functionality outside of the JCache spec, like data loading, querying, asynchronous mode, etc.
+
+You can get an instance of `IgniteCache` directly from `Ignite`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteCache cache = 
ignite.jcache(\"mycache\");",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Basic Operations"
+}
+[/block]
+Here are some basic JCache atomic operation examples.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "try (Ignite ignite = 
Ignition.start(\"examples/config/example-cache.xml\")) {\n    
IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n \n    // Store 
keys in cache (values will end up on different cache nodes).\n    for (int i = 
0; i < 10; i++)\n        cache.put(i, Integer.toString(i));\n \n    for (int i 
= 0; i < 10; i++)\n        System.out.println(\"Got [key=\" + i + \", val=\" + 
cache.get(i) + ']');\n}",
+      "language": "java",
+      "name": "Put & Get"
+    },
+    {
+      "code": "// Put-if-absent which returns previous value.\nInteger oldVal 
= cache.getAndPutIfAbsent(\"Hello\", 11);\n  \n// Put-if-absent which returns 
boolean success flag.\nboolean success = cache.putIfAbsent(\"World\", 22);\n  
\n// Replace-if-exists operation (opposite of getAndPutIfAbsent), returns 
previous value.\noldVal = cache.getAndReplace(\"Hello\", 11);\n \n// 
Replace-if-exists operation (opposite of putIfAbsent), returns boolean success 
flag.\nsuccess = cache.replace(\"World\", 22);\n  \n// Replace-if-matches 
operation.\nsuccess = cache.replace(\"World\", 2, 22);\n  \n// 
Remove-if-matches operation.\nsuccess = cache.remove(\"Hello\", 1);",
+      "language": "java",
+      "name": "Atomic"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "EntryProcessor"
+}
+[/block]
+Whenever doing `puts` and `updates` in cache, you are usually sending the full object state across the network. `EntryProcessor` allows for processing data directly on the primary nodes, often transferring only the deltas instead of the full state.
+
+Moreover, you can embed your own logic into `EntryProcessors`, for example, taking the previous cached value and incrementing it by 1.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteCache<String, Integer> cache = 
ignite.jcache(\"mycache\");\n\n// Increment cache value 10 times.\nfor (int i = 
0; i < 10; i++)\n  cache.invoke(\"mykey\", (entry, args) -> {\n    Integer val 
= entry.getValue();\n\n    entry.setValue(val == null ? 1 : val + 1);\n\n    
return null;\n  });",
+      "language": "java",
+      "name": "invoke"
+    },
+    {
+      "code": "IgniteCache<String, Integer> cache = ignite.jcache(\"mycache\");\n\n// Increment cache value 10 times.\nfor (int i = 0; i < 10; i++)\n  cache.invoke(\"mykey\", new EntryProcessor<String, Integer, Void>() {\n    @Override\n    public Void process(MutableEntry<String, Integer> entry, Object... args) {\n      Integer val = entry.getValue();\n\n      entry.setValue(val == null ? 1 : val + 1);\n\n      return null;\n    }\n  });",
+      "language": "java",
+      "name": "java7 invoke"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "info",
+  "title": "Atomicity",
+  "body": "`EntryProcessors` are executed atomically within a lock on the 
given cache key."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Asynchronous Support"
+}
+[/block]
+Just like all distributed APIs in Ignite, `IgniteCache` extends 
[IgniteAsynchronousSupport](doc:async-support) interface and can be used in 
asynchronous mode.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "// Enable asynchronous mode.\nIgniteCache<String, Integer> asyncCache = ignite.jcache(\"mycache\").withAsync();\n\n// Asynchronously store value in cache.\nasyncCache.getAndPut(\"1\", 1);\n\n// Get future for the above invocation.\nIgniteFuture<Integer> fut = asyncCache.future();\n\n// Asynchronously listen for the operation to complete.\nfut.listenAsync(f -> System.out.println(\"Previous cache value: \" + f.get()));",
+      "language": "java",
+      "name": "Async"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/off-heap-memory.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/off-heap-memory.md 
b/docs/wiki/data-grid/off-heap-memory.md
new file mode 100755
index 0000000..d7e1be6
--- /dev/null
+++ b/docs/wiki/data-grid/off-heap-memory.md
@@ -0,0 +1,180 @@
+Off-Heap memory allows your cache to overcome lengthy JVM Garbage Collection 
(GC) pauses when working with large heap sizes by caching data outside of main 
Java Heap space, but still in RAM.
+[block:image]
+{
+  "images": [
+    {
+      "image": [
+        "https://www.filepicker.io/api/file/iXCEc4RsQ4a1SM1Vfjnl",
+        "off-heap-memory.png",
+        "450",
+        "354",
+        "#6c521f",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "info",
+  "title": "Off-Heap Indexes",
+  "body": "Note that when off-heap memory is configured, Ignite stores query indexes off-heap as well. This means that indexes will not take any portion of on-heap memory."
+}
+[/block]
+
+[block:callout]
+{
+  "type": "success",
+  "body": "You can also manage GC pauses by starting multiple processes with smaller heaps on the same physical server. However, such an approach is wasteful when using REPLICATED caches, as we end up caching identical *replicated* data in every started JVM process.",
+  "title": "Off-Heap Memory vs. Multiple Processes"
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Tiered Off-Heap Storage"
+}
+[/block]
+Ignite provides a tiered storage model, where data can be stored and moved between **on-heap**, **off-heap**, and **swap space**. Going up the tiers provides more data storage capacity, with a gradual increase in latency.
+
+Ignite provides three memory modes, defined in `CacheMemoryMode`, for storing cache entries, supporting the tiered storage model:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Memory Mode",
+    "h-1": "Description",
+    "0-0": "`ONHEAP_TIERED`",
+    "0-1": "Store entries on-heap and evict to off-heap and optionally to 
swap.",
+    "1-0": "`OFFHEAP_TIERED`",
+    "2-0": "`OFFHEAP_VALUES`",
+    "1-1": "Store entries off-heap, bypassing on-heap and optionally evicting 
to swap.",
+    "2-1": "Store keys on-heap and values off-heap."
+  },
+  "cols": 2,
+  "rows": 3
+}
+[/block]
+Cache can be configured to use any of the three modes by setting the 
`memoryMode` configuration property of `CacheConfiguration`, as described below.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "ONHEAP_TIERED"
+}
+[/block]
+In Ignite, `ONHEAP_TIERED` is the default memory mode, where all cache 
entries are stored on-heap. Entries can be moved from on-heap to off-heap 
storage, and later to swap space, if one is configured.
+
+To configure `ONHEAP_TIERED` memory mode, you need to:
+
+1. Set `memoryMode` property of `CacheConfiguration` to `ONHEAP_TIERED`. 
+2. Enable off-heap memory (optional).
+3. Configure *eviction policy* for on-heap memory.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- 
Store cache entries on-heap. -->\n  <property name=\"memoryMode\" 
value=\"ONHEAP_TIERED\"/> \n\n  <!-- Enable Off-Heap memory with max size of 10 
Gigabytes (0 for unlimited). -->\n  <property name=\"offHeapMaxMemory\" 
value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n\n  <!-- Configure eviction policy. 
-->\n  <property name=\"evictionPolicy\">\n    <bean 
class=\"org.apache.ignite.cache.eviction.fifo.CacheFifoEvictionPolicy\">\n      
<!-- Evict to off-heap after cache size reaches maxSize. -->\n      <property 
name=\"maxSize\" value=\"100000\"/>\n    </bean>\n  </property>\n  
...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);\n\n//
 Set off-heap memory to 10GB (0 for unlimited)\ncacheCfg.setOffHeapMaxMemory(10 
* 1024L * 1024L * 1024L);\n\nCacheFifoEvictionPolicy evctPolicy = new 
CacheFifoEvictionPolicy();\n\n// Store only 100,000 entries 
on-heap.\nevctPolicy.setMaxSize(100000);\n\ncacheCfg.setEvictionPolicy(evctPolicy);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:callout]
+{
+  "type": "warning",
+  "body": "Note that if you do not enable an eviction policy in ONHEAP_TIERED 
mode, data will never be moved from on-heap to off-heap memory.",
+  "title": "Eviction Policy"
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "OFFHEAP_TIERED"
+}
+[/block]
+This memory mode allows you to configure your cache to store entries directly 
in off-heap storage, bypassing on-heap memory. Since all entries are stored 
off-heap, there is no need to explicitly configure an eviction policy. If the 
off-heap storage limit is exceeded (0 means unlimited), an LRU eviction policy 
is used to evict entries from the off-heap store, optionally moving them to 
swap space, if one is configured.
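
Ignite performs this off-heap LRU eviction internally. As a plain-Java sketch of 
the general mechanism only (the class name `LruStoreSketch` is hypothetical and 
this is not Ignite code), an access-ordered `LinkedHashMap` shows how a 
size-capped store drops its least recently used entry:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified sketch of LRU eviction: once the store exceeds its maximum
// size, the least recently used entry is dropped (in Ignite, such an entry
// would optionally be moved to swap space instead of being discarded).
class LruStoreSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LruStoreSketch(int maxSize) {
        super(16, 0.75f, true); // true = access order, which yields LRU behavior.
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); returning true evicts the eldest entry.
        return size() > maxSize;
    }
}
```

With a capacity of 2, putting three entries while touching key 1 in between 
leaves keys 1 and 3 in the store, since key 2 was least recently used.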
+
+To configure `OFFHEAP_TIERED` memory mode, you need to:
+
+1. Set `memoryMode` property of `CacheConfiguration` to `OFFHEAP_TIERED`. 
+2. Enable off-heap memory (optional).
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- 
Always store cache entries in off-heap memory. -->\n  <property 
name=\"memoryMode\" value=\"OFFHEAP_TIERED\"/>\n\n  <!-- Enable Off-Heap memory 
with max size of 10 Gigabytes (0 for unlimited). -->\n  <property 
name=\"offHeapMaxMemory\" value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n  
...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);\n\n//
 Set off-heap memory to 10GB (0 for unlimited)\ncacheCfg.setOffHeapMaxMemory(10 
* 1024L * 1024L * 1024L);\n\nIgniteConfiguration cfg = new 
IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start 
Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "OFFHEAP_VALUES"
+}
+[/block]
+This memory mode stores keys on-heap and values off-heap. It is useful when 
keys are small and values are large.
+
+To configure `OFFHEAP_VALUES` memory mode, you need to:
+
+1. Set `memoryMode` property of `CacheConfiguration` to `OFFHEAP_VALUES`. 
+2. Enable off-heap memory.
+3. Configure an *eviction policy* for on-heap memory (optional).
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- 
Always store cache entries in off-heap memory. -->\n  <property 
name=\"memoryMode\" value=\"OFFHEAP_VALUES\"/>\n\n  <!-- Enable Off-Heap memory 
with max size of 10 Gigabytes (0 for unlimited). -->\n  <property 
name=\"offHeapMaxMemory\" value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n  
...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_VALUES);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Swap Space"
+}
+[/block]
+Whenever your data set exceeds the limits of on-heap and off-heap memory, you 
can configure swap space, in which case Ignite will evict entries to disk 
instead of discarding them.
+[block:callout]
+{
+  "type": "warning",
+  "title": "Swap Space Performance",
+  "body": "Since swap space is on-disk, it is significantly slower than 
on-heap or off-heap memory."
+}
+[/block]
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- 
Enable swap. -->\n  <property name=\"swapEnabled\" value=\"true\"/> \n  
...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setSwapEnabled(true);\n\nIgniteConfiguration 
cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/persistent-store.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/persistent-store.md 
b/docs/wiki/data-grid/persistent-store.md
new file mode 100755
index 0000000..d6cb5f2
--- /dev/null
+++ b/docs/wiki/data-grid/persistent-store.md
@@ -0,0 +1,111 @@
+The JCache specification comes with APIs for 
[javax.cache.integration.CacheLoader](https://ignite.incubator.apache.org/jcache/1.0.0/javadoc/javax/cache/integration/CacheLoader.html)
 and 
[javax.cache.integration.CacheWriter](https://ignite.incubator.apache.org/jcache/1.0.0/javadoc/javax/cache/integration/CacheWriter.html)
 which are used for **read-through** and **write-through** to and from an 
underlying persistent storage, respectively (e.g. an RDBMS database like Oracle 
or MySQL, or a NoSQL database like MongoDB or Couchbase).
+
+While Ignite allows you to configure the `CacheLoader` and `CacheWriter` 
separately, it is very awkward to implement a transactional store within two 
separate classes, as multiple `load` and `put` operations have to share the 
same connection within the same transaction. To mitigate that, Ignite provides 
the `org.apache.ignite.cache.store.CacheStore` interface, which extends both 
`CacheLoader` and `CacheWriter`. 
+[block:callout]
+{
+  "type": "info",
+  "title": "Transactions",
+  "body": "`CacheStore` is fully transactional and automatically merges into 
the ongoing cache transaction."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "CacheStore"
+}
+[/block]
+The `CacheStore` interface in Ignite is used to write and load data to and 
from the underlying data store. In addition to the standard JCache loading and 
storing methods, it also introduces end-of-transaction demarcation and the 
ability to bulk-load a cache from the underlying data store.
+
+## loadCache()
+The `CacheStore.loadCache()` method allows for cache loading even without 
passing all the keys that need to be loaded. It is generally used for 
hot-loading the cache on startup, but can also be called at any point after the 
cache has been started.
+
+The `IgniteCache.loadCache()` method delegates to the `CacheStore.loadCache()` 
method on every cluster member that is running the cache. To invoke loading 
only on the local cluster node, use the `IgniteCache.localLoadCache()` method.
+[block:callout]
+{
+  "type": "info",
+  "body": "In the case of partitioned caches, keys that are not mapped to this 
node, either as primary or backup, will be automatically discarded by the 
cache."
+}
+[/block]
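
The division of labor above — the store streams entries into a closure while 
the cache keeps only keys mapped to the local node — can be pictured with a 
small stand-in. This is illustrative plain Java only; `LoadCacheSketch` and the 
modulo-based affinity check are assumptions, not Ignite's actual affinity 
function:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Stand-in for CacheStore.loadCache(): the store pushes entries into a
// closure, and the cache keeps only keys mapped to the local node.
class LoadCacheSketch {
    static final int PARTITIONS = 4;

    // Hypothetical affinity check: a key belongs to this node if its
    // partition matches the node's partition id.
    static boolean isLocal(long key, int localPartition) {
        return key % PARTITIONS == localPartition;
    }

    static Map<Long, String> loadOnNode(int localPartition, Map<Long, String> store) {
        Map<Long, String> cache = new HashMap<>();

        BiConsumer<Long, String> clo = (k, v) -> {
            if (isLocal(k, localPartition)) // Non-local keys are discarded.
                cache.put(k, v);
        };

        store.forEach(clo); // The store streams every entry into the closure.

        return cache;
    }
}
```

A node responsible for one of four partitions ends up caching only a quarter of 
the loaded key space.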
+## load(), write(), delete()
+The `load()`, `write()`, and `delete()` methods of the `CacheStore` are called 
whenever `get()`, `put()`, and `remove()`, respectively, are called on the 
`IgniteCache` interface. These methods are used to enable **read-through** and 
**write-through** behavior when working with individual cache entries.
+
+## loadAll(), writeAll(), deleteAll()
+The `loadAll()`, `writeAll()`, and `deleteAll()` methods of the `CacheStore` 
are called whenever `getAll()`, `putAll()`, and `removeAll()`, respectively, 
are called on the `IgniteCache` interface. These methods are used to enable 
**read-through** and **write-through** behavior when working with multiple 
cache entries and should generally be implemented using batch operations to 
provide better performance.
+[block:callout]
+{
+  "type": "info",
+  "title": "",
+  "body": "`CacheStoreAdapter` provides a default implementation of the 
`loadAll()`, `writeAll()`, and `deleteAll()` methods which simply iterates 
through all keys one by one."
+}
+[/block]
+## sessionEnd()
+Ignite has a concept of a store session, which may span more than one cache 
store operation. Sessions are especially useful when working with transactions.
+
+In the case of `ATOMIC` caches, the `sessionEnd()` method is called after the 
completion of each `CacheStore` method. In the case of `TRANSACTIONAL` caches, 
`sessionEnd()` is called at the end of each transaction, which makes it 
possible to either commit or roll back multiple operations on the underlying 
persistent store.
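
This commit-or-rollback demarcation can be sketched with a toy store that 
buffers writes for the duration of a session. `SessionSketch` is a hypothetical 
illustration of the semantics, not the Ignite API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy persistent store: write() buffers changes for the session, and
// sessionEnd() either applies them all (commit) or discards them (rollback).
class SessionSketch {
    private final Map<String, String> persisted = new HashMap<>();
    private final Map<String, String> sessionBuf = new HashMap<>();

    void write(String key, String val) {
        sessionBuf.put(key, val); // Deferred until the session ends.
    }

    void sessionEnd(boolean commit) {
        if (commit)
            persisted.putAll(sessionBuf); // Apply all buffered operations.

        sessionBuf.clear(); // On rollback, buffered writes are simply dropped.
    }

    String get(String key) {
        return persisted.get(key);
    }
}
```

A committed session makes all of its writes visible at once; a rolled-back 
session leaves the persisted state untouched.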
+[block:callout]
+{
+  "type": "info",
+  "body": "`CacheStoreAdapter` provides a default empty implementation of the 
`sessionEnd()` method."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "CacheStoreSession"
+}
+[/block]
+The main purpose of the cache store session is to hold the context between 
multiple store invocations whenever `CacheStore` is used in a cache 
transaction. For example, if using JDBC, you can store the ongoing database 
connection via the `CacheStoreSession.attach()` method. You can then commit 
this connection in the `CacheStore#sessionEnd(boolean)` method.
+
+`CacheStoreSession` can be injected into your cache store implementation via 
the `@CacheStoreSessionResource` annotation.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "CacheStore Example"
+}
+[/block]
+Below are a couple of possible cache store implementations. Note that the 
transactional implementation works both with and without transactions.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "public class CacheJdbcPersonStore extends 
CacheStoreAdapter<Long, Person> {\n  // This method is called whenever 
\"get(...)\" methods are called on IgniteCache.\n  @Override public Person 
load(Long key) {\n    try (Connection conn = connection()) {\n      try 
(PreparedStatement st = conn.prepareStatement(\"select * from PERSONS where 
id=?\")) {\n        st.setLong(1, key);\n\n        ResultSet rs = 
st.executeQuery();\n\n        return rs.next() ? new Person(rs.getLong(1), 
rs.getString(2), rs.getString(3)) : null;\n      }\n    }\n    catch 
(SQLException e) {\n      throw new CacheLoaderException(\"Failed to load: \" + 
key, e);\n    }\n  }\n\n  // This method is called whenever \"put(...)\" 
methods are called on IgniteCache.\n  @Override public void 
write(Cache.Entry<Long, Person> entry) {\n    try (Connection conn = 
connection()) {\n      // Syntax of MERGE statement is database specific and 
should be adapted for your database.\n      // If your database does not 
support MERGE statement then use update and insert statements sequentially.\n      
try (PreparedStatement st = conn.prepareStatement(\n        \"merge into 
PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        
Person val = entry.getValue();\n\n        st.setLong(1, entry.getKey());\n        
st.setString(2, val.getFirstName());\n        st.setString(3, 
val.getLastName());\n\n        st.executeUpdate();\n      }\n    }\n    catch 
(SQLException e) {\n      throw new CacheWriterException(\"Failed to write 
entry: \" + entry, e);\n    }\n  }\n\n  // This method is called whenever 
\"remove(...)\" methods 
are called on IgniteCache.\n  @Override public void delete(Object key) {\n    
try (Connection conn = connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\"delete from PERSONS where id=?\")) {\n        
st.setLong(1, (Long)key);\n\n      
   st.executeUpdate();\n      }\n    }\n    catch (SQLException e) {\n      
throw new CacheWriterException(\"Failed to delete: \" + key, e);\n    }\n  
}\n\n  // This method is called whenever \"loadCache()\" and 
\"localLoadCache()\"\n  // methods are called on IgniteCache. It is used for 
bulk-loading the cache.\n  // If you don't need to bulk-load the cache, skip 
this method.\n  @Override public void loadCache(IgniteBiInClosure<Long, Person> 
clo, Object... args) {\n    if (args == null || args.length == 0 || args[0] == 
null)\n      throw new CacheLoaderException(\"Expected entry count parameter is 
not provided.\");\n\n    final int entryCnt = (Integer)args[0];\n\n    try 
(Connection conn = connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs 
= st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < 
entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), 
rs.getString(2),
  rs.getString(3));\n\n            clo.apply(person.getId(), person);\n\n       
     cnt++;\n          }\n        }\n      }\n    }\n    catch (SQLException e) 
{\n      throw new CacheLoaderException(\"Failed to load values from cache 
store.\", e);\n    }\n  }\n\n  // Open JDBC connection.\n  private Connection 
connection() throws SQLException  {\n    // Open connection to your RDBMS 
systems (Oracle, MySQL, Postgres, DB2, Microsoft SQL, etc.)\n    // In this 
example we use H2 Database for simplification.\n    Connection conn = 
DriverManager.getConnection(\"jdbc:h2:mem:example;DB_CLOSE_DELAY=-1\");\n\n    
conn.setAutoCommit(true);\n\n    return conn;\n  }\n}",
+      "language": "java",
+      "name": "jdbc non-transactional"
+    },
+    {
+      "code": "public class CacheJdbcPersonStore extends 
CacheStoreAdapter<Long, Person> {\n  /** Auto-injected store session. */\n  
@CacheStoreSessionResource\n  private CacheStoreSession ses;\n\n  // Complete 
transaction or simply close connection if there is no transaction.\n  @Override 
public void sessionEnd(boolean commit) {\n    try (Connection conn = 
ses.getAttached()) {\n      if (conn != null && ses.isWithinTransaction()) {\n  
      if (commit)\n          conn.commit();\n        else\n          
conn.rollback();\n      }\n    }\n    catch (SQLException e) {\n      throw new 
CacheWriterException(\"Failed to end store session.\", e);\n    }\n  }\n\n  // 
This method is called whenever \"get(...)\" methods are called on 
IgniteCache.\n  @Override public Person load(Long key) {\n    try (Connection 
conn = connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\"select * from PERSONS where id=?\")) {\n        
st.setLong(1, key);\n\n        ResultSet rs = st.executeQuery();\n\n        
return rs.next() ? new Person(rs.getLong(1), 
rs.getString(2), rs.getString(3)) : null;\n      }\n    }\n    catch 
(SQLException e) {\n      throw new CacheLoaderException(\"Failed to load: \" + 
key, e);\n    }\n  }\n\n  // This method is called whenever \"put(...)\" 
methods are called on IgniteCache.\n  @Override public void 
write(Cache.Entry<Long, Person> entry) {\n    try (Connection conn = 
connection()) {\n      // Syntax of MERGE statement is database specific and 
should be adapted for your database.\n      // If your database does not 
support MERGE statement then use update and insert statements sequentially.\n      
try (PreparedStatement st = conn.prepareStatement(\n        \"merge into 
PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        
Person val = entry.getValue();\n\n        st.setLong(1, entry.getKey());\n        
st.setString(2, val.getFirstName());\n        st.setString(3, 
val.getLastName());\n\n        st.executeUpdate();\n      }\n    }\n    catch 
(SQLException e) {\n      throw new CacheWriterException(\"Failed to write 
entry: \" + entry, e);\n    }\n  }\n\n  // This method is called whenever 
\"remove(...)\" methods are called on IgniteCache.\n  @Override public void 
delete(Object key) {\n    try (Connection conn = connection()) {\n      try 
(PreparedStatement st = conn.prepareStatement(\"delete from PERSONS where 
id=?\")) {\n        st.setLong(1, (Long)key);\n\n        st.executeUpdate();\n  
    }\n    }\n    catch (SQLException e) {\n      throw new 
CacheWriterException(\"Failed to delete: \" + key, e);\n    }\n  }\n\n  // This 
method is called whenever \"loadCache()\" and \"localLoadCache()\"\n  // 
methods are called on IgniteCache. It is used for bulk-loading the cache.\n  // 
If you don't need to bulk-load the cache, skip this method.\n  @Override public 
void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {\n    if 
(args == null || 
args.length == 0 || args[0] == null)\n      throw new 
CacheLoaderException(\"Expected entry count parameter is not provided.\");\n\n  
  final int entryCnt = (Integer)args[0];\n\n    try (Connection conn = 
connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs 
= st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < 
entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), 
rs.getString(2), rs.getString(3));\n\n            clo.apply(person.getId(), 
person);\n\n            cnt++;\n          }\n        }\n      }\n    }\n    
catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load 
values from cache store.\", e);\n    }\n  }\n\n  // Opens JDBC connection and 
attaches it to the ongoing\n  // session if within a transaction.\n  private 
Connection connection() throws SQLException {\n    if 
(ses.isWithinTransaction()) {\n      Connection conn = 
ses.getAttached();\n\n      if (conn == null) {\n        conn = 
openConnection(false);\n\n        // Store connection in the session, so it can 
be accessed\n        // for other operations within the same transaction.\n     
   ses.attach(conn);\n      }\n\n      return conn;\n    }\n    // Transaction 
can be null in case of simple load or put operation.\n    else\n      return 
openConnection(true);\n  }\n\n  // Opens JDBC connection.\n  private Connection 
openConnection(boolean autocommit) throws SQLException {\n    // Open 
connection to your RDBMS systems (Oracle, MySQL, Postgres, DB2, Microsoft SQL, 
etc.)\n    // In this example we use H2 Database for simplification.\n    
Connection conn = 
DriverManager.getConnection(\"jdbc:h2:mem:example;DB_CLOSE_DELAY=-1\");\n\n    
conn.setAutoCommit(autocommit);\n\n    return conn;\n  }\n}",
+      "language": "java",
+      "name": "jdbc transactional"
+    },
+    {
+      "code": "public class CacheJdbcPersonStore implements CacheStore<Long, 
Person> {\n  // Skip single operations and open connection methods.\n  // You 
can copy them from jdbc non-transactional or jdbc transactional examples.\n  
...\n  \n  // This method is called whenever \"getAll(...)\" methods are called 
on IgniteCache.\n  @Override public Map<Long, Person> loadAll(Iterable<Long> keys) {\n  
  try (Connection conn = connection()) {\n      try (PreparedStatement st = 
conn.prepareStatement(\n        \"select firstName, lastName from PERSONS where 
id=?\")) {\n        Map<Long, Person> loaded = new HashMap<>();\n        \n        for 
(Long key : keys) {\n          st.setLong(1, key);\n          \n          
try(ResultSet rs = st.executeQuery()) {\n            if (rs.next())\n           
   loaded.put(key, new Person(key, rs.getString(1), rs.getString(2)));\n         
 }\n        }\n\n        return loaded;\n      }\n    }\n    catch 
(SQLException e) {\n      throw new CacheLoaderException(\"Failed to loadAll: 
\" + keys, e);\n    }\n  }\n  \n  // This method is called whenever 
\"putAll(...)\" methods are called on IgniteCache.\n  @Override public void 
writeAll(Collection<Cache.Entry<Long, Person>> entries) {\n    try (Connection 
conn = connection()) {\n      // Syntax of MERGE statement is database specific 
and should be adapted for your database.\n      // If your database does not 
support MERGE statement then use update and insert statements sequentially.\n      
try (PreparedStatement st = conn.prepareStatement(\n        \"merge into 
PERSONS (id, firstName, lastName) key (id) VALUES (?, ?, ?)\")) {\n        for 
(Cache.Entry<Long, Person> entry : entries) {\n          Person val = 
entry.getValue();\n          \n          st.setLong(1, entry.getKey());\n       
   st.setString(2, val.getFirstName());\n          st.setString(3, 
val.getLastName());\n          \n          st.addBatch();\n        }\n        
\n\t\t\t\tst.executeBatch();\n      }\n    }\n    catch (SQLException e) {\n    
  throw new CacheWriterException(\"Failed to writeAll: \" + entries, e);\n    
}\n  }\n  \n  // This method is called whenever \"removeAll(...)\" methods are 
called on IgniteCache.\n  @Override public void deleteAll(Collection<Long> 
keys) {\n    try (Connection conn = connection()) {\n      try 
(PreparedStatement st = conn.prepareStatement(\"delete from PERSONS where 
id=?\")) {\n        for (Long key : keys) {\n          st.setLong(1, key);\n    
      \n          st.addBatch();\n        }\n        
\n\t\t\t\tst.executeBatch();\n      }\n    }\n    catch (SQLException e) {\n    
  throw new CacheWriterException(\"Failed to deleteAll: \" + keys, e);\n    }\n 
 }\n}",
+      "language": "java",
+      "name": "jdbc bulk operations"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+A `CacheStore` implementation can be set on `CacheConfiguration` via a 
`Factory`, in much the same way as `CacheLoader` and `CacheWriter` are set.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  ...\n    
<property name=\"cacheConfiguration\">\n      <list>\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n          ...\n  
        <property name=\"cacheStoreFactory\">\n            <bean 
class=\"javax.cache.configuration.FactoryBuilder$SingletonFactory\">\n          
    <constructor-arg>\n                <bean class=\"foo.bar.MyPersonStore\">\n 
   \t\t\t\t\t\t\t...\n    \t\t\t\t\t\t</bean>\n    
\t\t\t\t\t</constructor-arg>\n    \t\t\t\t</bean>\n\t    \t\t</property>\n    
\t\t\t...\n    \t\t</bean>\n    \t</list>\n    </property>\n  ...\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "IgniteConfiguration cfg = new 
IgniteConfiguration();\n\nCacheConfiguration<Long, Person> cacheCfg = new 
CacheConfiguration<>();\n\nCacheStore<Long, Person> store;\n\nstore = new 
MyPersonStore();\n\ncacheCfg.setCacheStoreFactory(new 
FactoryBuilder.SingletonFactory<>(store));\ncacheCfg.setReadThrough(true);\ncacheCfg.setWriteThrough(true);\n\ncfg.setCacheConfiguration(cacheCfg);\n\n//
 Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/rebalancing.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/rebalancing.md 
b/docs/wiki/data-grid/rebalancing.md
new file mode 100755
index 0000000..99d50fd
--- /dev/null
+++ b/docs/wiki/data-grid/rebalancing.md
@@ -0,0 +1,105 @@
+When a new node joins the topology, existing nodes relinquish primary or 
backup ownership of some keys to the new node so that keys remain equally 
balanced across the grid at all times.
+
+If the new node becomes a primary or backup node for some partition, it will 
fetch the data for that partition from the previous primary node or from one of 
that partition's backup nodes. Once a partition is fully loaded on the new 
node, it will be marked obsolete on the old node and will eventually be evicted 
once all current transactions on that node have finished. Hence, for a short 
period of time after a topology change, a cache can have more backup copies of 
a key than configured. However, once rebalancing completes, the extra backup 
copies will be removed from node caches.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Preload Modes"
+}
+[/block]
+The following preload modes are defined in the `CachePreloadMode` enum.
+[block:parameters]
+{
+  "data": {
+    "0-0": "`SYNC`",
+    "h-0": "CachePreloadMode",
+    "h-1": "Description",
+    "0-1": "Synchronous rebalancing mode. Distributed caches will not start 
until all necessary data is loaded from other available grid nodes. This means 
that any call to cache public API will be blocked until rebalancing is 
finished.",
+    "1-1": "Asynchronous rebalancing mode. Distributed caches will start 
immediately and will load all necessary data from other available grid nodes in 
the background.",
+    "1-0": "`ASYNC`",
+    "2-1": "In this mode no rebalancing will take place, which means that 
caches will either be loaded on demand from the persistent store whenever data 
is accessed, or will be populated explicitly.",
+    "2-0": "`NONE`"
+  },
+  "cols": 2,
+  "rows": 3
+}
+[/block]
+By default, `ASYNC` preload mode is enabled. To use another mode, you can set 
the `preloadMode` property of `CacheConfiguration`, like so:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">          \t\t\n   
       \t<!-- Set synchronous preloading. -->\n    \t\t\t\t<property 
name=\"preloadMode\" value=\"SYNC\"/>\n            ... \n        </bean>\n    
</property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setPreloadMode(CachePreloadMode.SYNC);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Rebalance Message Throttling"
+}
+[/block]
+When the rebalancer transfers data from one node to another, it splits the 
whole data set into batches and sends each batch in a separate message. If your 
data sets are large and there are a lot of messages to send, the CPU or network 
can become overloaded. In this case it can be reasonable to wait between 
rebalance messages so that the negative performance impact of the preloading 
process is minimized. This time interval is controlled by the `preloadThrottle` 
configuration property of `CacheConfiguration`. Its default value is 0, which 
means that there will be no pauses between messages. Note that the size of a 
single message can also be customized via the `preloadBatchSize` configuration 
property (the default size is 512K).
+
+For example, if you want the preloader to send 2MB of data per message with a 
100 ms throttle interval, you should provide the following configuration: 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">          \t\t\n   
       \t<!-- Set batch size. -->\n    \t\t\t\t<property 
name=\"preloadBatchSize\" value=\"#{2 * 1024 * 1024}\"/>\n \n    \t\t\t\t<!-- 
Set throttle interval. -->\n    \t\t\t\t<property name=\"preloadThrottle\" 
value=\"100\"/>\n            ... \n        </bean>\n    </property>\n</bean> ",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setPreloadBatchSize(2 * 1024 * 1024);\n       
     \ncacheCfg.setPreloadThrottle(100);\n\nIgniteConfiguration cfg = new 
IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start 
Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+Cache preloading behavior can be customized by optionally setting the 
following configuration properties:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`setPreloadMode`",
+    "0-1": "Preload mode for distributed cache. See Preload Modes section for 
details.",
+    "1-0": "`setPreloadPartitionedDelay`",
+    "1-1": "Preloading delay in milliseconds. See Delayed And Manual 
Preloading section for details.",
+    "2-0": "`setPreloadBatchSize`",
+    "2-1": "Size (in bytes) to be loaded within a single preload message. 
Preloading algorithm will split total data set on every node into multiple 
batches prior to sending data.",
+    "3-0": "`setPreloadThreadPoolSize`",
+    "3-1": "Size of the preloading thread pool. Note that the size serves as 
a hint, and the implementation may create more threads for preloading than 
specified here (but never fewer).",
+    "4-0": "`setPreloadThrottle`",
+    "4-1": "Time in milliseconds to wait between preload messages to avoid 
overloading of CPU or network. When preloading large data sets, the CPU or 
network can get over-consumed with preloading messages, which consecutively may 
slow down the application performance. This parameter helps tune the amount of 
time to wait between preload messages to make sure that preloading process does 
not have any negative performance impact. Note that application will continue 
to work properly while preloading is still in progress.",
+    "5-0": "`setPreloadOrder`",
+    "6-0": "`setPreloadTimeout`",
+    "5-1": "Order in which preloading should be done. Preload order can be set 
to non-zero value for caches with SYNC or ASYNC preload modes only. Preloading 
for caches with smaller preload order will be completed first. By default, 
preloading is not ordered.",
+    "6-1": "Preload timeout (ms).",
+    "0-2": "`ASYNC`",
+    "1-2": "0 (no delay)",
+    "2-2": "512K",
+    "3-2": "2",
+    "4-2": "0 (throttling disabled)",
+    "5-2": "0",
+    "6-2": "10000"
+  },
+  "cols": 3,
+  "rows": 7
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/transactions.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/transactions.md 
b/docs/wiki/data-grid/transactions.md
new file mode 100755
index 0000000..551d6b3
--- /dev/null
+++ b/docs/wiki/data-grid/transactions.md
@@ -0,0 +1,127 @@
+Ignite supports two modes of cache operation, *transactional* and *atomic*. In `transactional` mode you can group multiple cache operations into a single transaction, while `atomic` mode supports multiple atomic operations, one at a time. `Atomic` mode is more lightweight and generally has better performance than `transactional` caches.
+
+However, regardless of which mode you use, as long as your cluster is alive, the data between different cluster nodes must remain consistent. This means that whichever node is used to retrieve data, it will never see data that has been partially committed or that is inconsistent with other data.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "IgniteTransactions"
+}
+[/block]
+The `IgniteTransactions` interface provides functionality for starting and completing transactions, as well as for subscribing listeners and obtaining metrics.
+[block:callout]
+{
+  "type": "info",
+  "title": "Cross-Cache Transactions",
+  "body": "You can combine multiple operations from different caches into one transaction. Note that this allows you to update caches of different types, like `REPLICATED` and `PARTITIONED` caches, in one transaction."
+}
+[/block]
+You can obtain an instance of `IgniteTransactions` as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteTransactions 
transactions = ignite.transactions();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+Here is an example of how transactions can be performed in Ignite:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "try (Transaction tx = transactions.txStart()) {\n    Integer hello = cache.get(\"Hello\");\n\n    // Guard against a missing entry before unboxing.\n    if (hello != null && hello == 1)\n        cache.put(\"Hello\", 11);\n\n    cache.put(\"World\", 22);\n\n    tx.commit();\n}",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Two-Phase-Commit (2PC)"
+}
+[/block]
+Ignite utilizes the 2PC protocol for its transactions, with one-phase-commit optimizations whenever applicable. Whenever data is updated within a transaction, Ignite will keep the transactional state in a local transaction map until `commit()` is called, at which point, if needed, the data is transferred to participating remote nodes.
+
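The deferred-write behavior above can be pictured with a few lines of plain Java (an illustrative model only, not Ignite's actual implementation): updates accumulate in a local transaction map and become visible in the shared store only when `commit()` transfers them.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a transaction that buffers writes locally and applies
// them to the shared store only on commit().
public class TxBuffer {
    private final Map<String, Integer> store;                    // shared "cache"
    private final Map<String, Integer> txMap = new HashMap<>();  // local tx state

    public TxBuffer(Map<String, Integer> store) {
        this.store = store;
    }

    public void put(String key, Integer val) {
        txMap.put(key, val); // buffered, not yet visible to others
    }

    public Integer get(String key) {
        // Reads see the transaction's own pending writes first.
        return txMap.containsKey(key) ? txMap.get(key) : store.get(key);
    }

    public void commit() {
        store.putAll(txMap); // transfer buffered state to the store
        txMap.clear();
    }
}
```

Until `commit()` runs, other readers of the store never observe the transaction's pending writes, which is the property the local transaction map provides.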
+For more information on how Ignite 2PC works, you can check out these blogs:
+  * [Two-Phase-Commit for Distributed In-Memory 
Caches](http://gridgain.blogspot.com/2014/09/two-phase-commit-for-distributed-in.html)
+  *  [Two-Phase-Commit for In-Memory Caches - Part 
II](http://gridgain.blogspot.com/2014/09/two-phase-commit-for-in-memory-caches.html)
 
+  * [One-Phase-Commit - Fast Transactions For In-Memory 
Caches](http://gridgain.blogspot.com/2014/09/one-phase-commit-fast-transactions-for.html)
 
+[block:callout]
+{
+  "type": "success",
+  "body": "Ignite provides fully ACID (**A**tomicity, **C**onsistency, 
**I**solation, **D**urability) compliant transactions that ensure guaranteed 
consistency.",
+  "title": "ACID Compliance"
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Optimistic and Pessimistic"
+}
+[/block]
+Whenever `TRANSACTIONAL` atomicity mode is configured, Ignite supports 
`OPTIMISTIC` and `PESSIMISTIC` concurrency modes for transactions. The main 
difference is that in `PESSIMISTIC` mode locks are acquired at the time of 
access, while in `OPTIMISTIC` mode locks are acquired during the `commit` phase.
+
+Ignite also supports the following isolation levels:
+  * `READ_COMMITTED` - data is always fetched from the primary node, even if it has already been accessed within the transaction.
+  * `REPEATABLE_READ` - data is fetched from the primary node only once on first access and stored in the local transactional map. All subsequent access to the same data is local.
+  * `SERIALIZABLE` - when combined with `OPTIMISTIC` concurrency, transactions may throw `TransactionOptimisticException` in case of concurrent updates. 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "IgniteTransactions txs = ignite.transactions();\n\n// Start transaction in pessimistic mode with repeatable read isolation level.\nTransaction tx = txs.txStart(TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
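The conflict detection behind `OPTIMISTIC` `SERIALIZABLE` transactions can be sketched with a hypothetical version check in plain Java (this is an illustration of the idea, not Ignite's internal code): every entry carries a version, and the commit is rejected if the version observed at read time is no longer current.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative optimistic concurrency: record the version observed at read
// time and reject the commit if a concurrent update changed it since.
// (Ignite would throw TransactionOptimisticException in that situation.)
public class OptimisticStore {
    private final Map<String, int[]> store = new HashMap<>(); // key -> {value, version}

    public void put(String key, int value) {
        int[] cur = store.get(key);
        int ver = cur == null ? 0 : cur[1] + 1; // each write bumps the version
        store.put(key, new int[] { value, ver });
    }

    // Snapshot of {value, version} as seen inside the transaction.
    public int[] read(String key) {
        return store.get(key).clone();
    }

    // Commit succeeds only if the version seen at read time is unchanged.
    public boolean commit(String key, int observedVersion, int newValue) {
        int[] cur = store.get(key);
        if (cur[1] != observedVersion)
            return false; // concurrent update detected: conflict
        store.put(key, new int[] { newValue, cur[1] + 1 });
        return true;
    }
}
```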
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Atomicity Mode"
+}
+[/block]
+Ignite supports two atomicity modes, defined in the `CacheAtomicityMode` enum:
+  * `TRANSACTIONAL`
+  * `ATOMIC`
+
+`TRANSACTIONAL` mode enables fully ACID-compliant transactions; however, when only atomic semantics are needed, it is recommended to use `ATOMIC` mode for better performance.
+
+`ATOMIC` mode provides better performance by avoiding transactional locks, while still providing data atomicity and consistency. Another difference in `ATOMIC` mode is that bulk writes, such as the `putAll(...)` and `removeAll(...)` methods, are no longer executed in one transaction and can partially fail. In case of partial failure, a `CachePartialUpdateException` is thrown, containing a list of keys for which the update failed.
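The partial-failure behavior can be pictured with a small stand-alone sketch (hypothetical names, not Ignite's API): each entry is written independently, and keys whose write fails are collected and reported together, much as `CachePartialUpdateException` carries the failed keys.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Non-transactional bulk write: every entry is applied independently, so a
// failure affects only some keys. Failed keys are collected and returned,
// mirroring how CachePartialUpdateException exposes them to the caller.
public class AtomicBulkWriter {
    public static List<String> putAll(Map<String, Integer> cache,
                                      Map<String, Integer> entries,
                                      Predicate<String> nodeAvailable) {
        List<String> failedKeys = new ArrayList<>();
        for (Map.Entry<String, Integer> e : entries.entrySet()) {
            if (nodeAvailable.test(e.getKey()))
                cache.put(e.getKey(), e.getValue()); // this key succeeds
            else
                failedKeys.add(e.getKey());          // this key fails alone
        }
        return failedKeys; // non-empty => partial failure
    }
}
```

Here `nodeAvailable` is a hypothetical stand-in for a primary node being reachable; the point is that, unlike a transaction, some keys commit while others do not.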
+[block:callout]
+{
+  "type": "info",
+  "body": "Note that transactions are disabled whenever `ATOMIC` mode is used, which makes it possible to achieve much higher performance and throughput in cases when transactions are not needed.",
+  "title": "Performance"
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+Atomicity mode is defined in the `CacheAtomicityMode` enum and can be configured via the `atomicityMode` property of `CacheConfiguration`. 
+
+The default atomicity mode is `ATOMIC`.
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set a cache name. -->\n            <property name=\"name\" value=\"myCache\"/>\n\n            <!-- Set atomicity mode, can be ATOMIC or TRANSACTIONAL. -->\n            <property name=\"atomicityMode\" value=\"TRANSACTIONAL\"/>\n            ...\n        </bean>\n    </property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/data-grid/web-session-clustering.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/web-session-clustering.md 
b/docs/wiki/data-grid/web-session-clustering.md
new file mode 100755
index 0000000..ac0be3d
--- /dev/null
+++ b/docs/wiki/data-grid/web-session-clustering.md
@@ -0,0 +1,236 @@
+Ignite In-Memory Data Fabric is capable of caching web sessions of all Java 
Servlet containers that follow Java Servlet 3.0 Specification, including Apache 
Tomcat, Eclipse Jetty, Oracle WebLogic, and others.
+
+Web session caching becomes useful when running a cluster of app servers. When running a web application in a servlet container, you may face performance and scalability problems. A single app server is usually unable to handle large volumes of traffic by itself. A common solution is to scale your web application across multiple clustered instances:
+[block:image]
+{
+  "images": [
+    {
+      "image": [
        "https://www.filepicker.io/api/file/AlvqqQhZRym15ji5iztA",
+        "web_sessions_1.png",
+        "561",
+        "502",
+        "#7f9eaa",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+In the architecture shown above, the High Availability Proxy (Load Balancer) distributes requests between multiple Application Server instances (App Server 1, App Server 2, ...), reducing the load on each instance and providing service availability if any of the instances fails. The problem here is web session availability. A web session keeps an intermediate logical state between requests by using cookies, and is normally bound to a particular application instance. Generally this is handled using sticky connections, ensuring that requests from the same user are handled by the same app server instance. However, if that instance fails, the session is lost, and the user will have to create it anew, losing all the current unsaved state:
+[block:image]
+{
+  "images": [
+    {
+      "image": [
        "https://www.filepicker.io/api/file/KtAyyVzrQ5CwhxODgEVV",
+        "web_sessions_2.png",
+        "561",
+        "502",
+        "#fb7661",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+A solution here is to use the Ignite In-Memory Data Fabric web sessions cache - a distributed cache that maintains a copy of each created session, sharing them between all instances. If any of your application instances fails, Ignite will automatically restore the sessions owned by the failed instance from the distributed cache, regardless of which app server the next request is forwarded to. Moreover, with web session caching, sticky connections become less important, as the session is available on any app server the web request may be routed to.
+[block:image]
+{
+  "images": [
+    {
+      "image": [
        "https://www.filepicker.io/api/file/8WyBbutWSm4PRYDNWRr7",
+        "web_sessions_3.png",
+        "561",
+        "502",
+        "#f73239",
+        ""
+      ]
+    }
+  ]
+}
+[/block]
+In this chapter we give a brief architecture overview of Ignite's web session 
caching functionality and instructions on how to configure your web application 
to enable web sessions caching.
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Architecture"
+}
+[/block]
+To set up a distributed web sessions cache with Ignite, you normally configure your web application to start an Ignite node (embedded mode). When multiple application server instances are started, all Ignite nodes connect with each other, forming a distributed cache.
+[block:callout]
+{
+  "type": "info",
+  "body": "Note that not every Ignite caching node has to run inside of an application server. You can also start additional, standalone Ignite nodes and add them to the topology as well."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Replication Strategies"
+}
+[/block]
+There are several replication strategies you can use when storing sessions in the Ignite In-Memory Data Fabric. The replication strategy is defined by the backing cache settings. In this section we briefly cover the most common configurations.
+
+##Fully Replicated Cache
+This strategy stores copies of all sessions on each Ignite node, providing maximum availability. However, with this approach you can only cache as many web sessions as can fit in memory on a single server. Additionally, performance may suffer, as every change of web session state now must be replicated to all other cluster nodes.
+
+To enable the fully replicated strategy, set the `cacheMode` of your backing cache to `REPLICATED`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache 
mode. -->\n    <property name=\"cacheMode\" value=\"REPLICATED\"/>\n    
...\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+##Partitioned Cache with Backups
+In partitioned mode, web sessions are split into partitions and every node is 
responsible for caching only partitions assigned to that node. With this 
approach, the more nodes you have, the more data can be cached. New nodes can 
always be added on the fly to add more memory.
+[block:callout]
+{
+  "type": "info",
+  "body": "With `Partitioned` mode, redundancy is addressed by configuring the number of backups for every web session being cached."
+}
+[/block]
+To enable the partitioned strategy, set the `cacheMode` of your backing cache to `PARTITIONED`, and set the number of backups with the `backups` property of `CacheConfiguration`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache 
mode. -->\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    
<property name=\"backups\" value=\"1\"/>\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
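The partition-to-node assignment behind `PARTITIONED` mode can be sketched roughly as follows (a deliberately simplified illustration; Ignite's real affinity function is more sophisticated): each session key hashes to a partition, and each partition maps to a primary node plus the configured number of backups.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified affinity sketch: key -> partition -> primary node + backups.
public class AffinitySketch {
    static final int PARTITIONS = 1024; // fixed partition count (illustrative)

    // Hash the key into a partition in [0, PARTITIONS).
    static int partition(Object key) {
        return Math.abs(key.hashCode() % PARTITIONS);
    }

    // Assign a partition to consecutive nodes: the first owner is the
    // primary, the remaining ones hold the backup copies.
    static List<Integer> mapToNodes(int partition, int nodeCount, int backups) {
        List<Integer> nodes = new ArrayList<>();
        for (int i = 0; i <= backups && i < nodeCount; i++)
            nodes.add((partition + i) % nodeCount);
        return nodes;
    }
}
```

With `backups = 1`, every session lives on two different nodes, so losing one node never loses the session; adding nodes simply spreads the partitions (and therefore the sessions) over more memory.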
+
+[block:callout]
+{
+  "type": "info",
+  "body": "See [Cache Distribution Models](doc:cache-distribution-models) for 
more information on different replication strategies available in Ignite."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Expiration and Eviction"
+}
+[/block]
+Stale sessions are cleaned up from the cache automatically when they expire. However, if a lot of long-living sessions are created, you may want to save memory by evicting dispensable sessions when the cache reaches a certain limit. This can be done by setting up a cache eviction policy and specifying the maximum number of sessions to be stored in the cache. For example, to enable automatic eviction with the LRU algorithm and a limit of 10000 sessions, you will need the following cache configuration:
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n    <!-- Cache 
name. -->\n    <property name=\"name\" value=\"session-cache\"/>\n \n    <!-- 
Set up LRU eviction policy with 10000 sessions limit. -->\n    <property 
name=\"evictionPolicy\">\n        <bean 
class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n        
    <property name=\"maxSize\" value=\"10000\"/>\n        </bean>\n    
</property>\n    ...\n</bean>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
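The LRU policy configured above behaves like the following minimal sketch (built on the JDK's `java.util.LinkedHashMap` in access order; an illustration of the policy, not Ignite's implementation): once the limit is exceeded, the least recently used session is evicted.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: a LinkedHashMap in access order evicts the eldest
// (least recently used) entry once maxSize is exceeded.
public class LruSessionCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public LruSessionCache(int maxSize) {
        super(16, 0.75f, true); // true = iterate in access order
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize; // evict once over the limit
    }
}
```

Note that a `get(...)` counts as an access, so a session that is still being used stays in the cache while idle sessions are the ones evicted.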
+
+[block:callout]
+{
+  "type": "info",
+  "body": "For more information about various eviction policies, see Eviction 
Policies section."
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Configuration"
+}
+[/block]
+To enable web session caching in your application with Ignite, you need to:
+
+1\. **Add Ignite JARs** - Download Ignite and add the following jars to your application’s classpath (`WEB-INF/lib` folder):
+  * `ignite.jar`
+  * `ignite-web.jar`
+  * `ignite-log4j.jar`
+  * `ignite-spring.jar`
+
+Or, if you have a Maven-based project, add the following to your application's `pom.xml`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-fabric</artifactId>\n    <version>${ignite.version}</version>\n    <type>pom</type>\n</dependency>\n\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-web</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-log4j</artifactId>\n    <version>${ignite.version}</version>\n</dependency>\n\n<dependency>\n    <groupId>org.apache.ignite</groupId>\n    <artifactId>ignite-spring</artifactId>\n    <version>${ignite.version}</version>\n</dependency>",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+Make sure to replace `${ignite.version}` with the actual Ignite version.
+
+2\. **Configure Cache Mode** - Configure Ignite cache in either `PARTITIONED` 
or `REPLICATED` mode (See [examples](#replication-strategies) above).
+
+3\. **Update `web.xml`** - Declare a context listener and web session filter 
in `web.xml`:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "...\n\n<listener>\n   
<listener-class>org.apache.ignite.startup.servlet.IgniteServletContextListenerStartup</listener-class>\n</listener>\n\n<filter>\n
   <filter-name>IgniteWebSessionsFilter</filter-name>\n   
<filter-class>org.apache.ignite.cache.websession.IgniteWebSessionFilter</filter-class>\n</filter>\n\n<!--
 You can also specify a custom URL pattern. -->\n<filter-mapping>\n   
<filter-name>IgniteWebSessionsFilter</filter-name>\n   
<url-pattern>/*</url-pattern>\n</filter-mapping>\n\n<!-- Specify Ignite 
configuration (relative to META-INF folder or Ignite_HOME). 
-->\n<context-param>\n   <param-name>IgniteConfigurationFilePath</param-name>\n 
  <param-value>config/default-config.xml 
</param-value>\n</context-param>\n\n<!-- Specify the name of Ignite cache for 
web sessions. -->\n<context-param>\n   
<param-name>IgniteWebSessionsCacheName</param-name>\n   
<param-value>partitioned</param-value>\n</context-param>\n\n...",
+      "language": "xml"
+    }
+  ]
+}
+[/block]
+On application start, the listener will start an Ignite node within your application, which will connect to other nodes in the network, forming a distributed cache.
+
+4\. **Set Eviction Policy (Optional)** - Set an eviction policy for stale web session data in the cache (See [example](#expiration-and-eviction) above).
+
+##Configuration Parameters
+`IgniteServletContextListenerStartup` has the following configuration 
parameters:
+[block:parameters]
+{
+  "data": {
+    "0-0": "`IgniteConfigurationFilePath`",
+    "0-1": "Path to Ignite configuration file (relative to `META_INF` folder 
or `IGNITE_HOME`).",
+    "0-2": "`/config/default-config.xml`",
+    "h-2": "Default",
+    "h-1": "Description",
+    "h-0": "Parameter Name"
+  },
+  "cols": 3,
+  "rows": 1
+}
+[/block]
+`IgniteWebSessionFilter` has the following configuration parameters:
+[block:parameters]
+{
+  "data": {
+    "h-0": "Parameter Name",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-0": "`IgniteWebSessionsGridName`",
+    "0-1": "Grid name for a started Ignite node. Should refer to grid in 
configuration file (if a grid name is specified in configuration).",
+    "0-2": "null",
+    "1-0": "`IgniteWebSessionsCacheName`",
+    "2-0": "`IgniteWebSessionsMaximumRetriesOnFail`",
+    "1-1": "Name of Ignite cache to use for web sessions caching.",
+    "1-2": "null",
+    "2-1": "Valid only for `ATOMIC` caches. Specifies number of retries in 
case of primary node failures.",
+    "2-2": "3"
+  },
+  "cols": 3,
+  "rows": 3
+}
+[/block]
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Supported Containers"
+}
+[/block]
+Ignite has been officially tested with the following servlet containers:
+  * Apache Tomcat 7
+  * Eclipse Jetty 9
+  * Apache Tomcat 6
+  * Oracle WebLogic >= 10.3.4
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/distributed-data-structures/atomic-types.md
----------------------------------------------------------------------
diff --git a/docs/wiki/distributed-data-structures/atomic-types.md 
b/docs/wiki/distributed-data-structures/atomic-types.md
new file mode 100755
index 0000000..ff14c35
--- /dev/null
+++ b/docs/wiki/distributed-data-structures/atomic-types.md
@@ -0,0 +1,97 @@
+Ignite supports a distributed ***atomic long*** and ***atomic reference***, similar to `java.util.concurrent.atomic.AtomicLong` and `java.util.concurrent.atomic.AtomicReference` respectively. 
+
+Atomics in Ignite are distributed across the cluster, essentially enabling atomic operations (such as increment-and-get or compare-and-set) on the same globally-visible value. For example, you could update the value of an atomic long on one node and read it from another node.
+
+##Features
+  * Retrieve current value.
+  * Atomically modify current value.
+  * Atomically increment or decrement current value.
+  * Atomically compare-and-set the current value to a new value.
+
+Distributed atomic long and atomic reference can be obtained via 
`IgniteAtomicLong` and `IgniteAtomicReference` interfaces respectively, as 
shown below:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteAtomicLong atomicLong = ignite.atomicLong(\n    \"atomicName\", // Atomic long name.\n    0,             // Initial value.\n    false          // Create if it does not exist.\n);",
+      "language": "java",
+      "name": "AtomicLong"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Create an AtomicReference.\nIgniteAtomicReference<String> ref = ignite.atomicReference(\n    \"refName\",  // Reference name.\n    \"someVal\",  // Initial value for atomic reference.\n    true         // Create if it does not exist.\n);",
+      "language": "java",
+      "name": "AtomicReference"
+    }
+  ]
+}
+[/block]
+
+Below is a usage example of `IgniteAtomicLong` and `IgniteAtomicReference`:
+
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Initialize atomic 
long.\nfinal IgniteAtomicLong atomicLong = ignite.atomicLong(\"atomicName\", 0, 
true);\n\n// Increment atomic long on local 
node.\nSystem.out.println(\"Incremented value: \" + 
atomicLong.incrementAndGet());\n",
+      "language": "java",
+      "name": "AtomicLong"
+    },
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\n// Initialize atomic reference.\nIgniteAtomicReference<String> ref = ignite.atomicReference(\"refName\", \"someVal\", true);\n\n// Compare the old value to the expected value and, only if they are\n// equal, set the reference to the new value.\nref.compareAndSet(\"WRONG EXPECTED VALUE\", \"someNewVal\"); // Won't change.",
+      "language": "java",
+      "name": "AtomicReference"
+    }
+  ]
+}
+[/block]
+All atomic operations provided by `IgniteAtomicLong` and 
`IgniteAtomicReference` are synchronous. The time an atomic operation will take 
depends on the number of nodes performing concurrent operations with the same 
instance of atomic long, the intensity of these operations, and network latency.
+[block:callout]
+{
+  "type": "info",
+  "title": "",
+  "body": "`IgniteCache` interface has `putIfAbsent()` and `replace()` 
methods, which provide the same CAS functionality as atomic types."
+}
+[/block]
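The compare-and-set pattern used with these types is the classic CAS retry loop. A minimal local sketch with the JDK's `java.util.concurrent.atomic.AtomicLong` (the `IgniteAtomicLong` calls would look the same, just cluster-wide):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasLoop {
    // Classic CAS retry loop: read the current value, compute the update,
    // and retry if another thread changed the value in between.
    public static long doubleValue(AtomicLong counter) {
        long cur, next;
        do {
            cur = counter.get();
            next = cur * 2;
        } while (!counter.compareAndSet(cur, next));
        return next;
    }
}
```

The loop makes the read-modify-write atomic without any lock, which is exactly the guarantee `compareAndSet` on the distributed types provides across nodes.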
+
+[block:api-header]
+{
+  "type": "basic",
+  "title": "Atomic Configuration"
+}
+[/block]
+Atomics in Ignite can be configured via the `atomicConfiguration` property of `IgniteConfiguration`. The following configuration parameters can be used:
+[block:parameters]
+{
+  "data": {
+    "0-0": "`setBackups(int)`",
+    "1-0": "`setCacheMode(CacheMode)`",
+    "2-0": "`setAtomicSequenceReserveSize(int)`",
+    "h-0": "Setter Method",
+    "h-1": "Description",
+    "h-2": "Default",
+    "0-1": "Set number of backups.",
+    "0-2": "0",
+    "1-1": "Set cache mode for all atomic types.",
+    "1-2": "`PARTITIONED`",
+    "2-1": "Sets the number of sequence values reserved for 
`IgniteAtomicSequence` instances.",
+    "2-2": "1000"
+  },
+  "cols": 3,
+  "rows": 3
+}
+[/block]
+##Example 
+[block:code]
+{
+  "codes": [
+    {
+      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    
<property name=\"atomicConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.AtomicConfiguration\">\n            
<!-- Set number of backups. -->\n            <property name=\"backups\" 
value=\"1\"/>\n          \t\n          \t<!-- Set number of sequence values to 
be reserved. -->\n          \t<property name=\"atomicSequenceReserveSize\" 
value=\"5000\"/>\n        </bean>\n    </property>\n</bean>",
+      "language": "xml"
+    },
+    {
+      "code": "AtomicConfiguration atomicCfg = new AtomicConfiguration();\n 
\n// Set number of backups.\natomicCfg.setBackups(1);\n\n// Set number of 
sequence values to be reserved. 
\natomicCfg.setAtomicSequenceReserveSize(5000);\n\nIgniteConfiguration cfg = 
new IgniteConfiguration();\n  \n// Use atomic configuration in Ignite 
configuration.\ncfg.setAtomicConfiguration(atomicCfg);\n  \n// Start Ignite 
node.\nIgnition.start(cfg);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
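The effect of `atomicSequenceReserveSize` can be sketched as follows (an illustrative model, not Ignite's code): each node reserves a whole range of sequence values up front, so most increments are served locally and only range exhaustion requires a cluster round-trip.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative range reservation: a node grabs `reserveSize` values at once
// from the shared counter, then hands them out locally until exhausted.
public class SequenceReserver {
    private final AtomicLong global;   // stands in for the cluster-wide counter
    private final int reserveSize;
    private long next, upper;          // locally reserved range [next, upper)

    public SequenceReserver(AtomicLong global, int reserveSize) {
        this.global = global;
        this.reserveSize = reserveSize;
        this.next = this.upper = 0;    // empty range: first call must reserve
    }

    public synchronized long nextValue() {
        if (next >= upper) {
            // Range exhausted: one "remote" call reserves the next chunk.
            next = global.getAndAdd(reserveSize);
            upper = next + reserveSize;
        }
        return next++; // served locally from the reserved range
    }
}
```

A larger reserve size means fewer round-trips but larger gaps in the sequence if a node goes down while holding an unused range, which is the trade-off the configuration parameter controls.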
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/af603c00/docs/wiki/distributed-data-structures/countdownlatch.md
----------------------------------------------------------------------
diff --git a/docs/wiki/distributed-data-structures/countdownlatch.md 
b/docs/wiki/distributed-data-structures/countdownlatch.md
new file mode 100755
index 0000000..4308ca3
--- /dev/null
+++ b/docs/wiki/distributed-data-structures/countdownlatch.md
@@ -0,0 +1,24 @@
+If you are familiar with `java.util.concurrent.CountDownLatch` for 
synchronization between threads within a single JVM, Ignite provides 
`IgniteCountDownLatch` to allow similar behavior across cluster nodes. 
+
+A distributed CountDownLatch in Ignite can be created as follows:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteCountDownLatch latch = ignite.countDownLatch(\n    \"latchName\", // Latch name.\n    10,           // Initial count.\n    false,        // Auto remove, when counter has reached zero.\n    true          // Create if it does not exist.\n);",
+      "language": "java"
+    }
+  ]
+}
+[/block]
+After the above code is executed, all nodes in the cluster will be able to synchronize on the latch named `latchName`. Below is an example of such synchronization:
+[block:code]
+{
+  "codes": [
+    {
+      "code": "Ignite ignite = Ignition.ignite();\n\nfinal 
IgniteCountDownLatch latch = ignite.countDownLatch(\"latchName\", 10, false, 
true);\n\n// Execute jobs.\nfor (int i = 0; i < 10; i++)\n    // Execute a job 
on some remote cluster node.\n    ignite.compute().run(() -> {\n        int 
newCnt = latch.countDown();\n\n        System.out.println(\"Counted down: 
newCnt=\" + newCnt);\n    });\n\n// Wait for all jobs to 
complete.\nlatch.await();",
+      "language": "java"
+    }
+  ]
+}
+[/block]
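The same pattern can be tried locally with the JDK's single-JVM `java.util.concurrent.CountDownLatch`, which `IgniteCountDownLatch` extends across the cluster; here threads stand in for remote nodes.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchExample {
    // Runs `jobs` tasks on a thread pool and waits for all of them via a
    // latch, mirroring the distributed example above.
    public static int runJobs(int jobs) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(jobs);
        AtomicInteger completed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < jobs; i++)
            pool.submit(() -> {
                completed.incrementAndGet(); // the "job"
                latch.countDown();           // signal completion
            });

        latch.await(); // blocks until every job has counted down
        pool.shutdown();
        return completed.get();
    }
}
```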
\ No newline at end of file
