http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/cache-modes.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/cache-modes.md 
b/wiki/documentation/data-grid/cache-modes.md
deleted file mode 100644
index 4bcb3c8..0000000
--- a/wiki/documentation/data-grid/cache-modes.md
+++ /dev/null
@@ -1,254 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite provides three different modes of cache operation: `LOCAL`, `REPLICATED`, and `PARTITIONED`. A cache mode is configured for each cache. Cache modes are defined in the `CacheMode` enumeration.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Local Mode"
-}
-[/block]
-`LOCAL` mode is the most lightweight mode of cache operation, as no data is distributed to other cache nodes. It is ideal for scenarios where data is either read-only, or can be periodically refreshed at some expiration frequency. It also works very well with read-through behavior, where data is loaded from persistent storage on misses. Other than distribution, local caches still have all the features of a distributed cache, such as automatic data eviction, expiration, disk swapping, data querying, and transactions.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Replicated Mode"
-}
-[/block]
-In `REPLICATED` mode all data is replicated to every node in the cluster. This 
cache mode provides the utmost availability of data as it is available on every 
node. However, in this mode every data update must be propagated to all other 
nodes which can have an impact on performance and scalability. 
-
-As the same data is stored on all cluster nodes, the size of a replicated cache is limited by the amount of memory available on the node with the smallest amount of RAM. This mode is ideal for scenarios where cache reads are a lot more frequent than cache writes, and data sets are small. If your system performs cache lookups more than 80% of the time, you should consider using the `REPLICATED` cache mode.
-[block:callout]
-{
-  "type": "success",
-  "body": "Replicated caches should be used when data sets are small and 
updates are infrequent."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Partitioned Mode"
-}
-[/block]
-`PARTITIONED` mode is the most scalable distributed cache mode. In this mode the overall data set is divided equally into partitions, and all partitions are split equally between participating nodes, essentially creating one huge distributed in-memory store for caching data. This approach allows you to store as much data as can fit in the total memory available across all nodes, allowing for multiple terabytes of data in cache memory across all cluster nodes. Essentially, the more nodes you have, the more data you can cache.
-
-Unlike `REPLICATED` mode, where updates are expensive because every node in 
the cluster needs to be updated, with `PARTITIONED` mode, updates become cheap 
because only one primary node (and optionally 1 or more backup nodes) need to 
be updated for every key. However, reads become somewhat more expensive because 
only certain nodes have the data cached. 
-
-In order to avoid extra data movement, it is important to always access the 
data exactly on the node that has that data cached. This approach is called 
*affinity colocation* and is strongly recommended when working with partitioned 
caches.
-[block:callout]
-{
-  "type": "success",
-  "body": "Partitioned caches are ideal when working with large data sets and updates are frequent.",
-  "title": ""
-}
-[/block]
-The picture below illustrates a simple view of a partitioned cache. 
Essentially we have key K1 assigned to Node1, K2 assigned to Node2, and K3 
assigned to Node3. 
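-The key-to-node assignment sketched above can be illustrated with a toy affinity function. This is a simplified, hypothetical sketch (Ignite's real affinity function is more elaborate and rebalances partitions on topology changes): each key hashes to one of a fixed number of partitions, and each partition maps to a node.

```java
import java.util.List;

class PartitionDemo {
    static final int PARTITIONS = 1024;

    // Map a key to one of the fixed partitions (mask out the sign bit
    // so negative hash codes still yield a valid partition index).
    static int partition(Object key) {
        return (key.hashCode() & Integer.MAX_VALUE) % PARTITIONS;
    }

    // Map a partition to a node by simple modulo over the current topology.
    static String nodeFor(Object key, List<String> nodes) {
        return nodes.get(partition(key) % nodes.size());
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("Node1", "Node2", "Node3");
        for (String key : new String[] {"K1", "K2", "K3"})
            System.out.println(key + " -> " + nodeFor(key, nodes));
    }
}
```

Because the mapping is a pure function of the key and the topology, any node can compute where a key lives without asking a coordinator, which is what makes affinity colocation possible.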
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/7pGSgxCVR3OZSHqYLdJv",
-        "in_memory_data_grid.png",
-        "500",
-        "338",
-        "#d64304",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-See the [configuration](#configuration) section below for an example of how to configure the cache mode.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cache Distribution Mode"
-}
-[/block]
-A node can operate in four different cache distribution modes when `PARTITIONED` mode is used. The cache distribution mode is defined by the `CacheDistributionMode` enumeration and can be configured via the `distributionMode` property of `CacheConfiguration`.
-[block:parameters]
-{
-  "data": {
-    "h-0": "Distribution Mode",
-    "h-1": "Description",
-    "0-0": "`PARTITIONED_ONLY`",
-    "0-1": "Local node may store primary and/or backup keys, but does not cache recently accessed keys, which are neither primary nor backup keys, in the near cache.",
-    "1-0": "`CLIENT_ONLY`",
-    "1-1": "Local node does not cache any data and communicates with other 
cache nodes via remote calls.",
-    "2-0": "`NEAR_ONLY`",
-    "2-1": "Local node will not be a primary or backup node for any key, but will cache recently accessed keys in a smaller near cache. The number of recently accessed keys to cache is controlled by the near eviction policy.",
-    "3-0": "`NEAR_PARTITIONED`",
-    "3-1": "Local node may store primary and/or backup keys, and will also cache recently accessed keys in the near cache. The number of recently accessed keys to cache is controlled by the near eviction policy."
-  },
-  "cols": 2,
-  "rows": 4
-}
-[/block]
-By default the `PARTITIONED_ONLY` cache distribution mode is used. A different mode can be selected by setting the `distributionMode` configuration property in `CacheConfiguration`. For example:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set a cache name. -->\n            <property name=\"name\" value=\"cacheName\"/>\n\n            <!-- Cache distribution mode. -->\n            <property name=\"distributionMode\" value=\"NEAR_ONLY\"/>\n            ...\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setDistributionMode(CacheDistributionMode.NEAR_ONLY);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Atomic Write Order Mode"
-}
-[/block]
-When using a partitioned cache in `CacheAtomicityMode.ATOMIC` mode, you can configure the atomic cache write order mode. The atomic write order mode determines which node will assign the write version (the sender or the primary node) and is defined by the `CacheAtomicWriteOrderMode` enumeration. There are two modes: `CLOCK` and `PRIMARY`.
-
-In `CLOCK` write order mode, write versions are assigned on the sender node. `CLOCK` mode is automatically turned on only when `CacheWriteSynchronizationMode.FULL_SYNC` is used, as it generally leads to better performance since write requests to primary and backup nodes are sent at the same time.
-
-In `PRIMARY` write order mode, the write version is assigned only on the primary node. In this mode the sender sends write requests only to primary nodes, which in turn assign the write version and forward the requests to backups.
-
-Atomic write order mode can be configured via `atomicWriteOrderMode` property 
of `CacheConfiguration`. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set a cache name. -->\n            <property name=\"name\" value=\"cacheName\"/>\n\n            <!-- Atomic write order mode. -->\n            <property name=\"atomicWriteOrderMode\" value=\"PRIMARY\"/>\n            ...\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setAtomicWriteOrderMode(CacheAtomicWriteOrderMode.CLOCK);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "body": "For more information on `ATOMIC` mode, refer to 
[Transactions](/docs/transactions) section."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Primary and Backup Nodes"
-}
-[/block]
-In `PARTITIONED` mode, the nodes to which keys are assigned are called primary nodes for those keys. You can also optionally configure any number of backup nodes for cached data. If the number of backups is greater than 0, then Ignite will automatically assign backup nodes for each individual key. For example, if the number of backups is 1, then every key cached in the data grid will have 2 copies: 1 primary and 1 backup.
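-To make the copy count concrete, here is a toy sketch of how a primary plus `backups` owner nodes could be chosen for a key. This is a hypothetical illustration, not Ignite's actual assignment logic: with `backups = 1`, each key ends up with 2 owners, i.e. 2 copies of the data.

```java
import java.util.ArrayList;
import java.util.List;

class BackupDemo {
    // Pick the primary plus `backups` distinct follower nodes for a key,
    // walking the topology from the key's home position.
    static List<String> ownersOf(Object key, List<String> nodes, int backups) {
        int home = (key.hashCode() & Integer.MAX_VALUE) % nodes.size();
        List<String> owners = new ArrayList<>();
        for (int i = 0; i <= backups && i < nodes.size(); i++)
            owners.add(nodes.get((home + i) % nodes.size()));
        return owners; // owners.get(0) is the primary, the rest are backups
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("Node1", "Node2", "Node3");
        // backups = 1 -> 2 copies: primary + 1 backup
        System.out.println(ownersOf("K1", nodes, 1));
    }
}
```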
-[block:callout]
-{
-  "type": "info",
-  "body": "By default, backups are turned off for better performance."
-}
-[/block]
-Backups can be configured by setting the `backups` property of `CacheConfiguration`, like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set a cache name. -->\n            <property name=\"name\" value=\"cacheName\"/>\n\n            <!-- Set cache mode. -->\n            <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n\n            <!-- Number of backup nodes. -->\n            <property name=\"backups\" value=\"1\"/>\n            ...\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setCacheMode(CacheMode.PARTITIONED);\n\ncacheCfg.setBackups(1);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Near Caches"
-}
-[/block]
-A partitioned cache can also be fronted by a `Near` cache, which is a smaller 
local cache that stores most recently or most frequently accessed data. Just 
like with a partitioned cache, the user can control the size of the near cache 
and its eviction policies. 
-
-In the vast majority of use cases, when utilizing Ignite with affinity colocation, near caches should not be used. If computations are collocated with the proper partitioned cache nodes, then the near cache is simply not needed, because all the data is already available locally in the partitioned cache.
-
-However, there are cases when it is simply impossible to send computations to 
remote nodes. For cases like this near caches can significantly improve 
scalability and the overall performance of the application.
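-As an illustration of the recency-based containment a near cache provides, the sketch below shows a minimal LRU map of the kind a near eviction policy maintains. It is a hypothetical stand-in built on `LinkedHashMap`, not Ignite's `CacheLruEvictionPolicy`: the hottest entries stay local while the least recently accessed ones are evicted.

```java
import java.util.LinkedHashMap;
import java.util.Map;

class NearLru<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    NearLru(int maxSize) {
        super(16, 0.75f, true); // access-order: reads refresh recency
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once capacity is exceeded.
        return size() > maxSize;
    }

    public static void main(String[] args) {
        NearLru<Integer, String> near = new NearLru<>(2);
        near.put(1, "a");
        near.put(2, "b");
        near.get(1);      // touch key 1 so it is most recently used
        near.put(3, "c"); // evicts key 2, the least recently used entry
        System.out.println(near.keySet()); // keys 1 and 3 remain
    }
}
```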
-
-Following are configuration parameters related to the near cache. These parameters apply to `PARTITIONED` caches only.
-[block:parameters]
-{
-  "data": {
-    "0-0": "`setNearEvictionPolicy(CacheEvictionPolicy)`",
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-1": "Eviction policy for near cache.",
-    "0-2": "`CacheLruEvictionPolicy` with max size of 10,000.",
-    "1-0": "`setEvictNearSynchronized(boolean)`",
-    "1-1": "Flag indicating whether eviction is synchronized with near caches 
on remote nodes.",
-    "1-2": "true",
-    "2-0": "`setNearStartSize(int)`",
-    "2-2": "256",
-    "2-1": "Start size for near cache."
-  },
-  "cols": 3,
-  "rows": 3
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set a cache name. -->\n            <property name=\"name\" value=\"cacheName\"/>\n\n            <!-- Start size for near cache. -->\n            <property name=\"nearStartSize\" value=\"512\"/>\n\n            <!-- Configure LRU eviction policy for near cache. -->\n            <property name=\"nearEvictionPolicy\">\n                <bean class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n                    <!-- Set max size to 1000. -->\n                    <property name=\"maxSize\" value=\"1000\"/>\n                </bean>\n            </property>\n            ...\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setNearStartSize(512);\n\nCacheLruEvictionPolicy
 evctPolicy = new 
CacheLruEvictionPolicy();\nevctPolicy.setMaxSize(1000);\n\ncacheCfg.setNearEvictionPolicy(evctPolicy);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-
-Cache modes are configured for each cache by setting the `cacheMode` property 
of `CacheConfiguration` like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    ...\n    <property name=\"cacheConfiguration\">\n        <bean class=\"org.apache.ignite.configuration.CacheConfiguration\">\n            <!-- Set a cache name. -->\n            <property name=\"name\" value=\"cacheName\"/>\n\n            <!-- Set cache mode. -->\n            <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n            ...\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setCacheMode(CacheMode.PARTITIONED);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/cache-queries.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/cache-queries.md 
b/wiki/documentation/data-grid/cache-queries.md
deleted file mode 100644
index 9df0c9e..0000000
--- a/wiki/documentation/data-grid/cache-queries.md
+++ /dev/null
@@ -1,181 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite supports a very elegant query API with support for
-
-  * [Predicate-based Scan Queries](#scan-queries)
-  * [SQL Queries](#sql-queries)
-  * [Text Queries](#text-queries)
-  * [Continuous Queries](#continuous-queries)
-  
-For SQL queries Ignite supports in-memory indexing, so all data lookups are extremely fast. If you are caching your data in [off-heap memory](doc:off-heap-memory), then query indexes will be cached in off-heap memory as well.
-
-Ignite also provides support for custom indexing via `IndexingSpi` and 
`SpiQuery` class.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Main Abstractions"
-}
-[/block]
-`IgniteCache` has several query methods, all of which receive some subclass of the `Query` class and return a `QueryCursor`.
-##Query
-The `Query` abstract class represents a paginated query to be executed on the distributed cache. You can set the page size for the returned cursor via the `Query.setPageSize(...)` method (default is `1024`).
-
-##QueryCursor
-`QueryCursor` represents a query result set and allows for transparent page-by-page iteration. Whenever the user starts iterating over the last page, the cursor will automatically request the next page in the background. For cases when pagination is not needed, you can use the `QueryCursor.getAll()` method, which will fetch the whole query result and store it in a collection.
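-The transparent pagination described above can be sketched as follows. This is a simplified, hypothetical model of a cursor that fetches a fixed-size page only when the previous one is exhausted; a real `QueryCursor` requests pages from remote nodes in the background rather than from a local list.

```java
import java.util.Iterator;
import java.util.List;

// A simplified page-by-page cursor: callers see plain iteration,
// while page fetches happen lazily behind the scenes.
class PagedCursor implements Iterable<Integer> {
    private final List<Integer> data; // stands in for the remote result set
    private final int pageSize;

    PagedCursor(List<Integer> data, int pageSize) {
        this.data = data;
        this.pageSize = pageSize;
    }

    @Override
    public Iterator<Integer> iterator() {
        return new Iterator<Integer>() {
            private List<Integer> page = fetchPage(0);
            private int pageStart = 0, pos = 0;

            public boolean hasNext() {
                return pageStart + pos < data.size();
            }

            public Integer next() {
                if (pos == page.size()) { // page exhausted: fetch the next one
                    pageStart += pageSize;
                    page = fetchPage(pageStart);
                    pos = 0;
                }
                return page.get(pos++);
            }
        };
    }

    // Simulates one remote page request.
    private List<Integer> fetchPage(int from) {
        return data.subList(from, Math.min(from + pageSize, data.size()));
    }

    public static void main(String[] args) {
        for (int v : new PagedCursor(List.of(1, 2, 3, 4, 5), 2))
            System.out.println(v);
    }
}
```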
-[block:callout]
-{
-  "type": "info",
-  "title": "Closing Cursors",
-  "body": "Cursors will close automatically if you iterate to the end of the result set. If you need to stop iteration sooner, you must call `close()` explicitly or rely on try-with-resources (`QueryCursor` implements `AutoCloseable`)."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Scan Queries"
-}
-[/block]
-Scan queries allow for querying the cache in a distributed manner based on a user-defined predicate.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Find only persons earning more than 1,000.\ntry (QueryCursor<Cache.Entry<Long, Person>> cursor = cache.query(new ScanQuery<Long, Person>((k, p) -> p.getSalary() > 1000))) {\n  for (Cache.Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "scan"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Find only persons earning more than 1,000.\nIgniteBiPredicate<Long, Person> filter = new IgniteBiPredicate<Long, Person>() {\n  @Override public boolean apply(Long key, Person p) {\n    return p.getSalary() > 1000;\n  }\n};\n\ntry (QueryCursor<Cache.Entry<Long, Person>> cursor = cache.query(new ScanQuery<>(filter))) {\n  for (Cache.Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "java7 scan"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "SQL Queries"
-}
-[/block]
-Ignite supports free-form SQL queries virtually without any limitations. SQL 
syntax is ANSI-99 compliant. You can use any SQL function, any aggregation, any 
grouping and Ignite will figure out where to fetch the results from.
-
-##SQL Joins
-Ignite supports distributed SQL joins. Moreover, if data resides in different 
caches, Ignite allows for cross-cache joins as well. 
-
-Joins between `PARTITIONED` and `REPLICATED` caches always work without any 
limitations. However, if you do a join between two `PARTITIONED` data sets, 
then you must make sure that the keys you are joining on are **collocated**. 
-
-##Field Queries
-Instead of selecting whole objects, you can choose to select only specific fields in order to minimize network and serialization overhead. For this purpose Ignite has the concept of fields queries.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\nSqlQuery sql = new SqlQuery(Person.class, \"salary > ?\");\n\n// Find only persons earning more than 1,000.\ntry (QueryCursor<Entry<Long, Person>> cursor = cache.query(sql.setArgs(1000))) {\n  for (Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "sql"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// SQL join on Person and Organization.\nSqlQuery sql = new SqlQuery(Person.class,\n  \"from Person, Organization \"\n  + \"where Person.orgId = Organization.id \"\n  + \"and lower(Organization.name) = lower(?)\");\n\n// Find all persons working for Ignite organization.\ntry (QueryCursor<Entry<Long, Person>> cursor = cache.query(sql.setArgs(\"Ignite\"))) {\n  for (Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "sql join"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\nSqlFieldsQuery sql = new SqlFieldsQuery(\"select concat(firstName, ' ', lastName) from Person\");\n\n// Select concatenated first and last name for all persons.\ntry (QueryCursor<List<?>> cursor = cache.query(sql)) {\n  for (List<?> row : cursor)\n    System.out.println(\"Full name: \" + row.get(0));\n}",
-      "language": "java",
-      "name": "sql fields"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Select with join between Person and Organization.\nSqlFieldsQuery sql = new SqlFieldsQuery(\n  \"select concat(firstName, ' ', lastName), Organization.name \"\n  + \"from Person, Organization where \"\n  + \"Person.orgId = Organization.id and \"\n  + \"Person.salary > ?\");\n\n// Only find persons with salary > 1000.\ntry (QueryCursor<List<?>> cursor = cache.query(sql.setArgs(1000))) {\n  for (List<?> row : cursor)\n    System.out.println(\"personName=\" + row.get(0) + \", orgName=\" + row.get(1));\n}",
-      "language": "java",
-      "name": "sql fields & join"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Text Queries"
-}
-[/block]
-Ignite also supports text-based queries based on Lucene indexing.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Query for all people with \"Master Degree\" in their resumes.\nTextQuery txt = new TextQuery(Person.class, \"Master Degree\");\n\ntry (QueryCursor<Entry<Long, Person>> masters = cache.query(txt)) {\n  for (Entry<Long, Person> e : masters)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "text query"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Continuous Queries"
-}
-[/block]
-Continuous queries are good for cases when you want to execute a query and 
then continue to get notified about the data changes that fall into your query 
filter.
-
-Continuous queries are supported via `ContinuousQuery` class, which supports 
the following:
-## Initial Query
-Whenever executing a continuous query, you have the option to execute an initial query before starting to listen to updates. The initial query can be set via the `ContinuousQuery.setInitialQuery(Query)` method and can be of any query type: [Scan](#scan-queries), [SQL](#sql-queries), or [Text](#text-queries). This parameter is optional; if not set, no initial query is executed.
-## Remote Filter
-This filter is executed on the primary node for a given key and evaluates whether the event should be propagated to the listener. If the filter returns `true`, then the listener will be notified; otherwise the event will be skipped. Filtering events on the node on which they occurred minimizes unnecessary network traffic for listener notifications. The remote filter can be set via the `ContinuousQuery.setRemoteFilter(CacheEntryEventFilter<K, V>)` method.
-## Local Listener
-Whenever events pass the remote filter, they will be sent to the client to notify the local listener there. The local listener is set via the `ContinuousQuery.setLocalListener(CacheEntryUpdatedListener<K, V>)` method.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Integer, String> cache = ignite.jcache(\"mycache\");\n\n// Create new continuous query.\nContinuousQuery<Integer, String> qry = new ContinuousQuery<>();\n\n// Optional initial query to select all keys greater than 10.\nqry.setInitialQuery(new ScanQuery<Integer, String>((k, v) -> k > 10));\n\n// Callback that is called locally when update notifications are received.\nqry.setLocalListener((evts) ->\n  evts.forEach(e -> System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue())));\n\n// This filter will be evaluated remotely on all nodes.\n// Entries that pass this filter will be sent to the caller.\nqry.setRemoteFilter(e -> e.getKey() > 10);\n\n// Execute query.\ntry (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {\n  // Iterate through existing data stored in cache.\n  for (Cache.Entry<Integer, String> e : cur)\n    System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue());\n\n  // Add a few more keys and watch a few more query notifications.\n  for (int i = 5; i < 15; i++)\n    cache.put(i, Integer.toString(i));\n}\n",
-      "language": "java",
-      "name": "continuous query"
-    },
-    {
-      "code": "IgniteCache<Integer, String> cache = ignite.jcache(CACHE_NAME);\n\n// Create new continuous query.\nContinuousQuery<Integer, String> qry = new ContinuousQuery<>();\n\nqry.setInitialQuery(new ScanQuery<Integer, String>(new IgniteBiPredicate<Integer, String>() {\n  @Override public boolean apply(Integer key, String val) {\n    return key > 10;\n  }\n}));\n\n// Callback that is called locally when update notifications are received.\nqry.setLocalListener(new CacheEntryUpdatedListener<Integer, String>() {\n  @Override public void onUpdated(Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {\n    for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)\n      System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue());\n  }\n});\n\n// This filter will be evaluated remotely on all nodes.\n// Entries that pass this filter will be sent to the caller.\nqry.setRemoteFilter(new CacheEntryEventFilter<Integer, String>() {\n  @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {\n    return e.getKey() > 10;\n  }\n});\n\n// Execute query.\ntry (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {\n  // Iterate through existing data.\n  for (Cache.Entry<Integer, String> e : cur)\n    System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue());\n\n  // Add a few more keys and watch more query notifications.\n  for (int i = 5; i < 15; i++)\n    cache.put(i, Integer.toString(i));\n}",
-      "language": "java",
-      "name": "java7 continuous query"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Query Configuration"
-}
-[/block]
-Queries can be configured from code by using the `@QuerySqlField` and `@QueryTextField` annotations.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class Person implements Serializable {\n  /** Person ID 
(indexed). */\n  @QuerySqlField(index = true)\n  private long id;\n\n  /** 
Organization ID (indexed). */\n  @QuerySqlField(index = true)\n  private long 
orgId;\n\n  /** First name (not-indexed). */\n  @QuerySqlField\n  private 
String firstName;\n\n  /** Last name (not indexed). */\n  @QuerySqlField\n  
private String lastName;\n\n  /** Resume text (create LUCENE-based TEXT index 
for this field). */\n  @QueryTextField\n  private String resume;\n\n  /** 
Salary (indexed). */\n  @QuerySqlField(index = true)\n  private double 
salary;\n  \n  ...\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/data-grid.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/data-grid.md 
b/wiki/documentation/data-grid/data-grid.md
deleted file mode 100644
index eed906c..0000000
--- a/wiki/documentation/data-grid/data-grid.md
+++ /dev/null
@@ -1,85 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite's in-memory data grid has been built from the ground up with a notion of horizontal scale and the ability to add nodes on demand in real time. It has been designed to scale linearly to hundreds of nodes, with strong semantics for data locality and affinity-based data routing to reduce redundant data movement.
-
-Ignite data grid supports local, replicated, and partitioned data sets and allows freely querying across these data sets using standard SQL syntax, including support for distributed SQL joins.
-
-Ignite data grid is lightning fast and is one of the fastest implementations of transactional and atomic data caching in a cluster today.
-[block:callout]
-{
-  "type": "success",
-  "title": "Data Consistency",
-  "body": "As long as your cluster is alive, Ignite will guarantee that the 
data between different cluster nodes will always remain consistent regardless 
of crashes or topology changes."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "title": "JCache (JSR 107)",
-  "body": "Ignite Data Grid implements [JCache](doc:jcache) (JSR 107) 
specification (currently undergoing JSR 107 TCK testing)"
-}
-[/block]
-
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/ZBWQwPXbQmyq6RRUyWfm",
-        "in-memory-data-grid-1.jpg",
-        "500",
-        "338",
-        "#e8893c",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-##Features
-  * Distributed In-Memory Caching
-  * Lightning Fast Performance
-  * Elastic Scalability
-  * Distributed In-Memory Transactions
-  * Web Session Clustering
-  * Hibernate L2 Cache Integration
-  * Tiered Off-Heap Storage
-  * Distributed SQL Queries with support for Joins
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCache"
-}
-[/block]
-The `IgniteCache` interface is the gateway into the Ignite cache implementation and provides methods for storing and retrieving data, executing queries (including SQL), iterating and scanning, etc.
-
-##JCache
-The `IgniteCache` interface extends the `javax.cache.Cache` interface from the JCache specification and adds functionality to it, mainly having to do with local vs. distributed operations, queries, metrics, etc.
-
-You can obtain an instance of `IgniteCache` as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Obtain instance of 
cache named \"myCache\".\n// Note that different caches may have different 
generics.\nIgniteCache<Integer, String> cache = ignite.jcache(\"myCache\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/data-loading.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/data-loading.md 
b/wiki/documentation/data-grid/data-loading.md
deleted file mode 100644
index 04bf661..0000000
--- a/wiki/documentation/data-grid/data-loading.md
+++ /dev/null
@@ -1,94 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Data loading usually refers to initializing cache data on startup. Using standard cache `put(...)` or `putAll(...)` operations is generally inefficient for loading large amounts of data.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteDataLoader"
-}
-[/block]
-For fast loading of large amounts of data, Ignite provides the `IgniteDataLoader` utility interface, which internally batches entries together and collocates those batches with the nodes on which the data will be cached.
-
-The high loading speed is achieved with the following techniques:
-  * Entries that are mapped to the same cluster member will be batched 
together in a buffer.
-  * Multiple buffers can coexist at the same time.
-  * To avoid running out of memory, data loader has a maximum number of 
buffers it can process concurrently.
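-The batching behavior above can be tuned on the loader itself. Below is a minimal sketch; the tuning method names (`perNodeBufferSize`, `perNodeParallelOperations`) are assumptions based on the data loader API of this era and should be checked against the `IgniteDataLoader` Javadoc:

```java
// Hypothetical tuning of the data loader's internal batching.
// Method names are assumptions; consult the IgniteDataLoader Javadoc.
try (IgniteDataLoader<Integer, String> ldr = ignite.dataLoader("myCache")) {
    // Entries mapped to the same node are batched in buffers of this size.
    ldr.perNodeBufferSize(1024);

    // Cap on buffers in flight per node, to bound memory usage.
    ldr.perNodeParallelOperations(8);

    for (int i = 0; i < 100000; i++)
        ldr.addData(i, Integer.toString(i));
}
```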
-
-To add data to the data loader, call the `IgniteDataLoader.addData(...)` method.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Get the data loader reference and load data.\ntry 
(IgniteDataLoader<Integer, String> ldr = ignite.dataLoader(\"myCache\")) {    
\n    // Load entries.\n    for (int i = 0; i < 100000; i++)\n        
ldr.addData(i, Integer.toString(i));\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-## Allow Overwrite
-By default, the data loader supports only initial data loading, which means that if it encounters an entry that is already in the cache, it skips it. This is the most efficient mode, as the data loader does not have to worry about data versioning in the background.
-
-If you anticipate that the data may already be in the cache and you need to overwrite it, set the `IgniteDataLoader.allowOverwrite(true)` flag.
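-A minimal sketch of enabling overwrites (the cache name is illustrative):

```java
// Allow the loader to overwrite entries that already exist in the cache.
// This is slower than the default initial-load mode, since the loader
// must now account for data versioning.
try (IgniteDataLoader<Integer, String> ldr = ignite.dataLoader("myCache")) {
    ldr.allowOverwrite(true);

    // Existing entries with these keys will be replaced, not skipped.
    ldr.addData(1, "updated-value");
}
```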
-
-## Using Updater
-For cases when you need to execute some custom logic instead of just adding 
new data, you can take advantage of `IgniteDataLoader.Updater` API. 
-
-In the example below, we generate random numbers and store them as keys, while the number of times each number has been generated is stored as the value. The `Updater` increments the value by 1 each time the same key is loaded into the cache.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Get the data loader reference and load data.\ntry (IgniteDataLoader<Integer, Long> ldr = ignite.dataLoader(\"myCache\")) {\n    // Configure the updater to increment values of existing keys.\n    ldr.updater((cache, entries) -> {\n      for (Map.Entry<Integer, Long> e : entries)\n        cache.invoke(e.getKey(), (entry, args) -> {\n          Long val = entry.getValue();\n\n          entry.setValue(val == null ? 1L : val + 1);\n\n          return null;\n        });\n    });\n\n    for (int i = 0; i < CNT; i++)\n        ldr.addData(RAND.nextInt(100), 1L);\n}",
-      "language": "java",
-      "name": "updater"
-    },
-    {
-      "code": "// Closure that increments passed in value.\nfinal GridClosure<Long, Long> INC = new GridClosure<Long, Long>() {\n    @Override public Long apply(Long e) {\n        return e == null ? 1L : e + 1;\n    }\n};\n\n// Get the data loader reference and load data.\ntry (GridDataLoader<Integer, Long> ldr = grid.dataLoader(\"myCache\")) {\n    // Configure updater.\n    ldr.updater(new GridDataLoadCacheUpdater<Integer, Long>() {\n        @Override public void update(GridCache<Integer, Long> cache,\n            Collection<Map.Entry<Integer, Long>> entries) throws GridException {\n                for (Map.Entry<Integer, Long> e : entries)\n                    cache.transform(e.getKey(), INC);\n        }\n    });\n\n    for (int i = 0; i < CNT; i++)\n        ldr.addData(RAND.nextInt(100), 1L);\n}",
-      "language": "java",
-      "name": "java7 updater"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCache.loadCache()"
-}
-[/block]
-Another way to load large amounts of data into the cache is through the [CacheStore.loadCache()](docs/persistent-store#loadcache-) method, which allows cache data to be loaded without passing in all the keys that need to be loaded.
-
-The `IgniteCache.loadCache()` method delegates to the `CacheStore.loadCache()` method on every cluster member that is running the cache. To invoke loading only on the local cluster node, use the `IgniteCache.localLoadCache()` method.
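-A sketch of triggering the load from application code. The trailing arguments are forwarded as-is to the `CacheStore.loadCache()` implementation; the entry-count argument and cache name here are illustrative:

```java
// Delegates to CacheStore.loadCache() on every node running "myCache".
// The null first argument means "no entry filter"; the remaining
// arguments are passed through to the store implementation.
IgniteCache<Long, Person> cache = ignite.jcache("myCache");

cache.loadCache(null, 100000);

// Or load only on the local node:
cache.localLoadCache(null, 100000);
```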
-[block:callout]
-{
-  "type": "info",
-  "body": "In case of partitioned caches, keys that are not mapped to this 
node, either as primary or backups, will be automatically discarded by the 
cache."
-}
-[/block]
-Here is an example of a `CacheStore.loadCache()` implementation. For a complete example of how a `CacheStore` can be implemented, refer to [Persistent Store](doc:persistent-store).
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class CacheJdbcPersonStore extends CacheStoreAdapter<Long, Person> {\n  ...\n  // This method is called whenever \"IgniteCache.loadCache()\" or\n  // \"IgniteCache.localLoadCache()\" methods are called.\n  @Override public void loadCache(IgniteBiInClosure<Long, Person> clo, Object... args) {\n    if (args == null || args.length == 0 || args[0] == null)\n      throw new CacheLoaderException(\"Expected entry count parameter is not provided.\");\n\n    final int entryCnt = (Integer)args[0];\n\n    try (Connection conn = connection()) {\n      try (PreparedStatement st = conn.prepareStatement(\"select * from PERSONS\")) {\n        try (ResultSet rs = st.executeQuery()) {\n          int cnt = 0;\n\n          while (cnt < entryCnt && rs.next()) {\n            Person person = new Person(rs.getLong(1), rs.getString(2), rs.getString(3));\n\n            clo.apply(person.getId(), person);\n\n            cnt++;\n          }\n        }\n      }\n    }\n    catch (SQLException e) {\n      throw new CacheLoaderException(\"Failed to load values from cache store.\", e);\n    }\n  }\n  ...\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/evictions.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/evictions.md 
b/wiki/documentation/data-grid/evictions.md
deleted file mode 100644
index bbf28ae..0000000
--- a/wiki/documentation/data-grid/evictions.md
+++ /dev/null
@@ -1,103 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Eviction policies control the maximum number of elements that can be stored in a cache's on-heap memory. Whenever the maximum on-heap cache size is reached, entries are evicted into [off-heap space](doc:off-heap-memory), if it is enabled.
-
-In Ignite, eviction policies are pluggable and are controlled via the `CacheEvictionPolicy` interface. An eviction policy implementation is notified of every cache change and defines the algorithm for choosing which entries to evict from the cache.
-[block:callout]
-{
-  "type": "info",
-  "body": "If your data set fits in memory, then an eviction policy will not provide any benefit and should be disabled, which is the default behavior."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Least Recently Used (LRU)"
-}
-[/block]
-LRU eviction policy is based on [Least Recently Used 
(LRU)](http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used) 
algorithm, which ensures that the least recently used entry (i.e. the entry 
that has not been touched the longest) gets evicted first. 
-[block:callout]
-{
-  "type": "success",
-  "body": "LRU eviction policy nicely fits most of the use cases for caching. 
Use it whenever in doubt."
-}
-[/block]
-This eviction policy is implemented by `CacheLruEvictionPolicy` and can be 
configured via `CacheConfiguration`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  
<property name=\"name\" value=\"myCache\"/>\n    ...\n    <property 
name=\"evictionPolicy\">\n        <!-- LRU eviction policy. -->\n        <bean 
class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n        
    <!-- Set the maximum cache size to 1 million (default is 100,000). -->\n    
        <property name=\"maxSize\" value=\"1000000\"/>\n        </bean>\n    
</property>\n    ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheLruEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "First In First Out (FIFO)"
-}
-[/block]
-FIFO eviction policy is based on the [First-In-First-Out (FIFO)](https://en.wikipedia.org/wiki/FIFO) algorithm, which ensures that the entry that has been in the cache the longest will be evicted first. It differs from `CacheLruEvictionPolicy` in that it ignores the access order of entries.
-
-This eviction policy is implemented by `CacheFifoEvictionPolicy` and can be 
configured via `CacheConfiguration`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  
<property name=\"name\" value=\"myCache\"/>\n    ...\n    <property 
name=\"evictionPolicy\">\n        <!-- FIFO eviction policy. -->\n        <bean 
class=\"org.apache.ignite.cache.eviction.fifo.CacheFifoEvictionPolicy\">\n      
      <!-- Set the maximum cache size to 1 million (default is 100,000). -->\n  
          <property name=\"maxSize\" value=\"1000000\"/>\n        </bean>\n    
</property>\n    ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheFifoEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Random"
-}
-[/block]
-Random eviction policy randomly chooses entries to evict. It is mainly used for debugging and benchmarking purposes.
-
-This eviction policy is implemented by `CacheRandomEvictionPolicy` and can be 
configured via `CacheConfiguration`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.cache.CacheConfiguration\">\n  
<property name=\"name\" value=\"myCache\"/>\n    ...\n    <property 
name=\"evictionPolicy\">\n        <!-- Random eviction policy. -->\n        
<bean 
class=\"org.apache.ignite.cache.eviction.random.CacheRandomEvictionPolicy\">\n  
          <!-- Set the maximum cache size to 1 million (default is 100,000). 
-->\n            <property name=\"maxSize\" value=\"1000000\"/>\n        
</bean>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\n// Set the maximum cache size to 1 million (default is 100,000).\ncacheCfg.setEvictionPolicy(new CacheRandomEvictionPolicy(1000000));\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/hibernate-l2-cache.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/hibernate-l2-cache.md 
b/wiki/documentation/data-grid/hibernate-l2-cache.md
deleted file mode 100644
index 5567845..0000000
--- a/wiki/documentation/data-grid/hibernate-l2-cache.md
+++ /dev/null
@@ -1,190 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite In-Memory Data Fabric can be used as a [Hibernate](http://hibernate.org) second-level cache (L2 cache), which can significantly speed up the persistence layer of your application.
-
-[Hibernate](http://hibernate.org) is a well-known and widely used framework 
for Object-Relational Mapping (ORM). While interacting closely with an SQL 
database, it performs caching of retrieved data to minimize expensive database 
requests.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/D35hL3OuQ46YA4v3BLwJ";,
-        "hibernate-L2-cache.png",
-        "600",
-        "478",
-        "#b7917a",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-All work with Hibernate database-mapped objects is done within a session, usually bound to a worker thread or a Web session. By default, Hibernate uses only the per-session (L1) cache, so objects cached in one session are not seen in another. However, a global second-level (L2) cache may be used, in which cached objects are visible to all sessions that use the same L2 cache configuration. This usually gives a significantly greater performance gain, because each newly created session can take full advantage of the data already present in L2 cache memory (which outlives any session-local L1 cache).
-
-While the L1 cache is always enabled and fully implemented by Hibernate internally, the L2 cache is optional and can have multiple pluggable implementations. Ignite can be easily plugged in as an L2 cache implementation, and can be used in all access modes (`READ_ONLY`, `READ_WRITE`, `NONSTRICT_READ_WRITE`, and `TRANSACTIONAL`), supporting a wide range of related features:
-  * caching to memory and disk, as well as off-heap memory;
-  * cache transactions, which make the `TRANSACTIONAL` mode possible;
-  * clustering, with 2 different replication modes: `REPLICATED` and `PARTITIONED`.
-
-To start using Ignite as a Hibernate L2 cache, you need to perform three simple steps:
-  * Add Ignite libraries to your application's classpath.
-  * Enable L2 cache and specify Ignite implementation class in L2 cache 
configuration.
-  * Configure Ignite caches for L2 cache regions and start the embedded Ignite 
node (and, optionally, external Ignite nodes). 
- 
-In the section below we cover these steps in more detail.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "L2 Cache Configuration"
-}
-[/block]
-To configure Ignite In-Memory Data Fabric as a Hibernate L2 cache, without any 
changes required to the existing Hibernate code, you need to:
-  * Configure Hibernate itself to use Ignite as L2 cache.
-  * Configure Ignite cache appropriately. 
-
-##Hibernate Configuration Example
-A typical Hibernate configuration for L2 cache with Ignite would look like the 
one below:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<hibernate-configuration>\n    <session-factory>\n        ...\n 
       <!-- Enable L2 cache. -->\n        <property 
name=\"cache.use_second_level_cache\">true</property>\n        \n        <!-- 
Generate L2 cache statistics. -->\n        <property 
name=\"generate_statistics\">true</property>\n        \n        <!-- Specify 
GridGain as L2 cache provider. -->\n        <property 
name=\"cache.region.factory_class\">org.gridgain.grid.cache.hibernate.GridHibernateRegionFactory</property>\n
        \n        <!-- Specify the name of the grid, that will be used for 
second level caching. -->\n        <property 
name=\"org.gridgain.hibernate.grid_name\">hibernate-grid</property>\n        \n 
       <!-- Set default L2 cache access type. -->\n        <property 
name=\"org.gridgain.hibernate.default_access_type\">READ_ONLY</property>\n      
  \n        <!-- Specify the entity classes for mapping. -->\n        <mapping 
class=\"com.mycompany.MyEntity1\"/>\n        <mapping class=\"com.m
 ycompany.MyEntity2\"/>\n        \n        <!-- Per-class L2 cache settings. 
-->\n        <class-cache class=\"com.mycompany.MyEntity1\" 
usage=\"read-only\"/>\n        <class-cache class=\"com.mycompany.MyEntity2\" 
usage=\"read-only\"/>\n        <collection-cache 
collection=\"com.mycompany.MyEntity1.children\" usage=\"read-only\"/>\n        
...\n    </session-factory>\n</hibernate-configuration>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Here, we do the following:
-  * Enable L2 cache (and, optionally, the L2 cache statistics generation).
-  * Specify Ignite as L2 cache implementation.
-  * Specify the name of the caching grid (should correspond to the one in 
Ignite configuration).
-  * Specify the entity classes and configure caching for each class (a 
corresponding cache region should be configured in Ignite). 
-
-##Ignite Configuration Example
-A typical Ignite configuration for Hibernate L2 caching looks like this:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<!-- Basic configuration for atomic cache. -->\n<bean id=\"atomic-cache\" class=\"org.apache.ignite.configuration.CacheConfiguration\" abstract=\"true\">\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    <property name=\"atomicityMode\" value=\"ATOMIC\"/>\n    <property name=\"writeSynchronizationMode\" value=\"FULL_SYNC\"/>\n</bean>\n \n<!-- Basic configuration for transactional cache. -->\n<bean id=\"transactional-cache\" class=\"org.apache.ignite.configuration.CacheConfiguration\" abstract=\"true\">\n    <property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    <property name=\"atomicityMode\" value=\"TRANSACTIONAL\"/>\n    <property name=\"writeSynchronizationMode\" value=\"FULL_SYNC\"/>\n</bean>\n \n<bean
id=\"ignite.cfg\" 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n    <!-- \n     
   Specify the name of the caching grid (should correspond to the \n        one 
in Hibernate configuration).\n    -->\n    <property name=\"gridName\" 
 value=\"hibernate-grid\"/>\n    ...\n    <!-- \n        Specify cache 
configuration for each L2 cache region (which corresponds \n        to a full 
class name or a full association name).\n    -->\n    <property 
name=\"cacheConfiguration\">\n        <list>\n            <!--\n                
Configurations for entity caches.\n            -->\n            <bean 
parent=\"transactional-cache\">\n                <property name=\"name\" 
value=\"com.mycompany.MyEntity1\"/>\n            </bean>\n            <bean 
parent=\"transactional-cache\">\n                <property name=\"name\" 
value=\"com.mycompany.MyEntity2\"/>\n            </bean>\n            <bean 
parent=\"transactional-cache\">\n                <property name=\"name\" 
value=\"com.mycompany.MyEntity1.children\"/>\n            </bean>\n \n          
  <!-- Configuration for update timestamps cache. -->\n            <bean 
parent=\"atomic-cache\">\n                <property name=\"name\" 
value=\"org.hibernate.cache.spi.UpdateTimesta
 mpsCache\"/>\n            </bean>\n \n            <!-- Configuration for query 
result cache. -->\n            <bean parent=\"atomic-cache\">\n                
<property name=\"name\" 
value=\"org.hibernate.cache.internal.StandardQueryCache\"/>\n            
</bean>\n        </list>\n    </property>\n    ...\n</bean>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Here, we specify the cache configuration for each L2 cache region:
-  * We use `PARTITIONED` cache to split the data between caching nodes. 
Another possible strategy is to enable `REPLICATED` mode, thus replicating a 
full dataset between all caching nodes. See Cache Distribution Models for more 
information.
-  * We specify the cache name that corresponds to an L2 cache region name (either a full class name or a full association name).
-  * We use `TRANSACTIONAL` atomicity mode to take advantage of cache 
transactions.
-  * We enable `FULL_SYNC` to be always fully synchronized with backup nodes.
-
-Additionally, we specify a cache for update timestamps, which may be `ATOMIC`, 
for better performance.
-
-Having configured the Ignite caching node, we can start it from within our code as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": 
"Ignition.start(\"my-config-folder/my-ignite-configuration.xml\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-After the above line is executed, the internal Ignite node is started and ready to cache data. We can also start additional standalone nodes by running the following command from the console:
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$IGNITE_HOME/bin/ignite.sh 
my-config-folder/my-ignite-configuration.xml",
-      "language": "text"
-    }
-  ]
-}
-[/block]
-For Windows, use the `.bat` script in the same folder.
-[block:callout]
-{
-  "type": "success",
-  "body": "The nodes may be started on other hosts as well, forming a distributed caching cluster. Be sure to specify the right network settings in the Ignite configuration file for that."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Query Cache"
-}
-[/block]
-In addition to the L2 cache, Hibernate offers a query cache. This cache stores the results of queries (either HQL or Criteria) with a given set of parameters, so when you repeat a query with the same parameter set, it hits the cache without going to the database.
-
-The query cache may be useful if you have a number of queries that repeat with the same parameter values. As with the L2 cache, Hibernate relies on a third-party cache implementation, and Ignite In-Memory Data Fabric can be used as such.
-[block:callout]
-{
-  "type": "success",
-  "body": "Consider using support for [SQL-based In-Memory 
Queries](/docs/cache-queries) in Ignite which should perform faster than going 
through Hibernate."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Query Cache Configuration"
-}
-[/block]
-The [configuration](#l2-cache-configuration) information above fully applies to the query cache, but some additional configuration and code changes are required.
-
-##Hibernate Configuration
-To enable the query cache in Hibernate, you only need one additional line in the configuration file:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<!-- Enable query cache. -->\n<property 
name=\"cache.use_query_cache\">true</property>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Yet, a code modification is required: for each query that you want to cache, you should enable the `cacheable` flag by calling `setCacheable(true)`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Session ses = ...;\n \n// Create Criteria query.\nCriteria 
criteria = ses.createCriteria(cls);\n \n// Enable cacheable 
flag.\ncriteria.setCacheable(true);\n \n...",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-After this is done, your query results will be cached.
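-The same flag applies to HQL queries, not just Criteria queries. A brief sketch (the entity name and parameter are illustrative):

```java
// Enable result caching for an HQL query. Cached results are stored in
// the query cache region configured in the Ignite configuration.
Query qry = ses.createQuery("from MyEntity1 where id > :id");

qry.setParameter("id", 100L);

// Enable the cacheable flag, same as for Criteria queries.
qry.setCacheable(true);

List results = qry.list();
```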
-
-##Ignite Configuration
-To enable Hibernate query caching in Ignite, you need to specify an additional 
cache configuration:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "\n<property name=\"cacheConfiguration\">\n    <list>\n        
...\n        <!-- Query cache (refers to atomic cache defined in above 
example). -->\n        <bean parent=\"atomic-cache\">\n            <property 
name=\"name\" value=\"org.hibernate.cache.internal.StandardQueryCache\"/>\n     
   </bean>\n    </list>\n</property>",
-      "language": "xml"
-    }
-  ]
-}
-[/block]
-Notice that the cache is made `ATOMIC` for better performance.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/jcache.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/jcache.md 
b/wiki/documentation/data-grid/jcache.md
deleted file mode 100644
index c2ba448..0000000
--- a/wiki/documentation/data-grid/jcache.md
+++ /dev/null
@@ -1,116 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Apache Ignite data grid is an implementation of the JCache (JSR 107) specification (currently undergoing JSR 107 TCK testing). JCache provides a simple yet powerful API for data access. However, the specification purposely omits any details about data distribution and consistency, allowing vendors enough freedom in their own implementations.
-
-In addition to JCache, Ignite provides ACID transactions, data querying capabilities (including SQL), various memory models, and more.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCache"
-}
-[/block]
-`IgniteCache` is based on **JCache (JSR 107)**, so at the most basic level the API can be reduced to the `javax.cache.Cache` interface. However, the `IgniteCache` API also provides functionality outside of the JCache spec, such as data loading, querying, asynchronous mode, etc.
-
-You can get an instance of `IgniteCache` directly from `Ignite`:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\nIgniteCache cache = 
ignite.jcache(\"mycache\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Basic Operations"
-}
-[/block]
-Here are some basic JCache atomic operation examples.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "try (Ignite ignite = 
Ignition.start(\"examples/config/example-cache.xml\")) {\n    
IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n \n    // Store 
keys in cache (values will end up on different cache nodes).\n    for (int i = 
0; i < 10; i++)\n        cache.put(i, Integer.toString(i));\n \n    for (int i 
= 0; i < 10; i++)\n        System.out.println(\"Got [key=\" + i + \", val=\" + 
cache.get(i) + ']');\n}",
-      "language": "java",
-      "name": "Put & Get"
-    },
-    {
-      "code": "// Put-if-absent which returns previous value.\nInteger oldVal 
= cache.getAndPutIfAbsent(\"Hello\", 11);\n  \n// Put-if-absent which returns 
boolean success flag.\nboolean success = cache.putIfAbsent(\"World\", 22);\n  
\n// Replace-if-exists operation (opposite of getAndPutIfAbsent), returns 
previous value.\noldVal = cache.getAndReplace(\"Hello\", 11);\n \n// 
Replace-if-exists operation (opposite of putIfAbsent), returns boolean success 
flag.\nsuccess = cache.replace(\"World\", 22);\n  \n// Replace-if-matches 
operation.\nsuccess = cache.replace(\"World\", 2, 22);\n  \n// 
Remove-if-matches operation.\nsuccess = cache.remove(\"Hello\", 1);",
-      "language": "java",
-      "name": "Atomic"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "EntryProcessor"
-}
-[/block]
-When doing `puts` and `updates` in the cache, you usually send the full object state across the network. `EntryProcessor` allows data to be processed directly on primary nodes, often transferring only the deltas instead of the full state.
-
-Moreover, you can embed your own logic into an `EntryProcessor`, for example, taking the previous cached value and incrementing it by 1.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<String, Integer> cache = 
ignite.jcache(\"mycache\");\n\n// Increment cache value 10 times.\nfor (int i = 
0; i < 10; i++)\n  cache.invoke(\"mykey\", (entry, args) -> {\n    Integer val 
= entry.getValue();\n\n    entry.setValue(val == null ? 1 : val + 1);\n\n    
return null;\n  });",
-      "language": "java",
-      "name": "invoke"
-    },
-    {
-      "code": "IgniteCache<String, Integer> cache = ignite.jcache(\"mycache\");\n\n// Increment cache value 10 times.\nfor (int i = 0; i < 10; i++)\n  cache.invoke(\"mykey\", new EntryProcessor<String, Integer, Void>() {\n    @Override\n    public Void process(MutableEntry<String, Integer> entry, Object... args) {\n      Integer val = entry.getValue();\n\n      entry.setValue(val == null ? 1 : val + 1);\n\n      return null;\n    }\n  });",
-      "language": "java",
-      "name": "java7 invoke"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "Atomicity",
-  "body": "`EntryProcessors` are executed atomically within a lock on the 
given cache key."
-}
-[/block]
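The per-key atomicity described above can be illustrated with a plain-JDK analogy (not Ignite API): `ConcurrentHashMap.compute` also runs its remapping function atomically under a per-key lock, so concurrent increments are never lost; this is the same property `cache.invoke()` gives an `EntryProcessor` on the entry's primary node.

```java
import java.util.concurrent.ConcurrentHashMap;

public class AtomicIncrementDemo {
    public static void main(String[] args) throws InterruptedException {
        // compute() runs the remapping function atomically under a
        // per-key lock, analogous to invoking an EntryProcessor.
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();

        Runnable incrementTask = () -> {
            for (int i = 0; i < 1000; i++)
                map.compute("counter", (k, v) -> v == null ? 1 : v + 1);
        };

        Thread t1 = new Thread(incrementTask);
        Thread t2 = new Thread(incrementTask);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // No increments are lost despite two concurrent writers.
        System.out.println(map.get("counter")); // prints 2000
    }
}
```

A naive `get` followed by `put` would lose updates under the same workload; `invoke` (like `compute`) avoids that race and, in Ignite, additionally avoids shipping the full value over the network.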
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Asynchronous Support"
-}
-[/block]
-Just like all distributed APIs in Ignite, `IgniteCache` extends the [IgniteAsynchronousSupport](doc:async-support) interface and can be used in asynchronous mode.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Enable asynchronous mode.\nIgniteCache<String, Integer> asyncCache = ignite.jcache(\"mycache\").withAsync();\n\n// Asynchronously store value in cache.\nasyncCache.getAndPut(\"1\", 1);\n\n// Get future for the above invocation.\nIgniteFuture<Integer> fut = asyncCache.future();\n\n// Asynchronously listen for the operation to complete.\nfut.listenAsync(f -> System.out.println(\"Previous cache value: \" + f.get()));",
-      "language": "java",
-      "name": "Async"
-    }
-  ]
-}
-[/block]
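For readers more familiar with the plain JDK, the same start-then-listen flow can be sketched with `CompletableFuture` (an analogy, not the Ignite API: `supplyAsync` stands in for the async cache operation, and `whenComplete` for the listener attached via `fut.listenAsync`):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;

public class AsyncListenDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        // Start the operation asynchronously and obtain a future for it.
        CompletableFuture<Integer> fut = CompletableFuture.supplyAsync(() -> 1 + 1);

        // Attach a listener that fires when the operation completes.
        fut.whenComplete((val, err) -> {
            System.out.println("Result: " + val);
            done.countDown();
        });

        done.await(); // keep the demo JVM alive until the listener runs
    }
}
```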
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/off-heap-memory.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/off-heap-memory.md 
b/wiki/documentation/data-grid/off-heap-memory.md
deleted file mode 100644
index d184dd8..0000000
--- a/wiki/documentation/data-grid/off-heap-memory.md
+++ /dev/null
@@ -1,197 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Off-Heap memory allows your cache to overcome lengthy JVM Garbage Collection 
(GC) pauses when working with large heap sizes by caching data outside of main 
Java Heap space, but still in RAM.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/iXCEc4RsQ4a1SM1Vfjnl",
-        "off-heap-memory.png",
-        "450",
-        "354",
-        "#6c521f",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "Off-Heap Indexes",
-  "body": "Note that when off-heap memory is configured, Ignite also stores query indexes off-heap. This means that indexes will not take any portion of on-heap memory."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "body": "You can also manage GC pauses by starting multiple processes with smaller heaps on the same physical server. However, such an approach is wasteful when using REPLICATED caches, as we would end up caching identical *replicated* data in every started JVM process.",
-  "title": "Off-Heap Memory vs. Multiple Processes"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Tiered Off-Heap Storage"
-}
-[/block]
-Ignite provides a tiered storage model, where data can be stored and moved between **on-heap**, **off-heap**, and **swap space**. Each successive tier provides more storage capacity, with a gradual increase in latency.
-
-Ignite provides three memory modes, defined in the `CacheMemoryMode` enumeration, for storing cache entries in support of the tiered storage model:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Memory Mode",
-    "h-1": "Description",
-    "0-0": "`ONHEAP_TIERED`",
-    "0-1": "Store entries on-heap and evict to off-heap and optionally to swap.",
-    "1-0": "`OFFHEAP_TIERED`",
-    "1-1": "Store entries off-heap, bypassing on-heap and optionally evicting to swap.",
-    "2-0": "`OFFHEAP_VALUES`",
-    "2-1": "Store keys on-heap and values off-heap."
-  },
-  "cols": 2,
-  "rows": 3
-}
-[/block]
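The tier-to-tier eviction flow behind these modes can be sketched as a toy in-process model (illustrative plain Java only; the class name, capacities, and FIFO order here are invented for the sketch, and real Ignite manages the tiers internally based on `CacheConfiguration`):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of ONHEAP_TIERED: a bounded on-heap tier evicts its
// eldest entries to an off-heap tier, which in turn evicts to swap.
class TieredStoreSketch {
    final Map<String, String> swapTier = new HashMap<>();

    // Off-heap tier: holds up to 2 entries, then spills the eldest to swap.
    final Map<String, String> offHeapTier = new LinkedHashMap<String, String>() {
        @Override protected boolean removeEldestEntry(Map.Entry<String, String> e) {
            if (size() > 2) {
                swapTier.put(e.getKey(), e.getValue());
                return true;
            }
            return false;
        }
    };

    // On-heap tier: holds up to 2 entries, then spills the eldest to off-heap.
    final Map<String, String> heapTier = new LinkedHashMap<String, String>() {
        @Override protected boolean removeEldestEntry(Map.Entry<String, String> e) {
            if (size() > 2) {
                offHeapTier.put(e.getKey(), e.getValue());
                return true;
            }
            return false;
        }
    };

    void put(String key, String val) {
        heapTier.put(key, val);
    }

    String get(String key) {
        // Search the tiers top-down: fastest first, swap last.
        if (heapTier.containsKey(key)) return heapTier.get(key);
        if (offHeapTier.containsKey(key)) return offHeapTier.get(key);
        return swapTier.get(key);
    }
}
```

Putting five entries leaves the two newest on-heap, the next two in the off-heap tier, and the oldest in swap, mirroring how entries gradually sink to slower, larger tiers.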
-A cache can be configured to use any of the three modes by setting the `memoryMode` configuration property of `CacheConfiguration`, as described below.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ONHEAP_TIERED"
-}
-[/block]
-In Ignite, `ONHEAP_TIERED` is the default memory mode, where all cache entries are stored on-heap. Entries can be moved from on-heap to off-heap storage, and later to swap space, if one is configured.
-
-To configure `ONHEAP_TIERED` memory mode, you need to:
-
-1. Set `memoryMode` property of `CacheConfiguration` to `ONHEAP_TIERED`. 
-2. Enable off-heap memory (optionally).
-3. Configure *eviction policy* for on-heap memory.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- 
Store cache entries on-heap. -->\n  <property name=\"memoryMode\" 
value=\"ONHEAP_TIERED\"/> \n\n  <!-- Enable Off-Heap memory with max size of 10 
Gigabytes (0 for unlimited). -->\n  <property name=\"offHeapMaxMemory\" 
value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n\n  <!-- Configure eviction policy. 
-->\n  <property name=\"evictionPolicy\">\n    <bean 
class=\"org.apache.ignite.cache.eviction.fifo.CacheFifoEvictionPolicy\">\n      
<!-- Evict to off-heap after cache size reaches maxSize. -->\n      <property 
name=\"maxSize\" value=\"100000\"/>\n    </bean>\n  </property>\n  
...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);\n\n//
 Set off-heap memory to 10GB (0 for unlimited)\ncacheCfg.setOffHeapMaxMemory(10 
* 1024L * 1024L * 1024L);\n\nCacheFifoEvictionPolicy evctPolicy = new 
CacheFifoEvictionPolicy();\n\n// Store only 100,000 entries 
on-heap.\nevctPolicy.setMaxSize(100000);\n\ncacheCfg.setEvictionPolicy(evctPolicy);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "warning",
-  "body": "Note that if you do not configure an eviction policy in ONHEAP_TIERED mode, data will never be moved from on-heap to off-heap memory.",
-  "title": "Eviction Policy"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "OFFHEAP_TIERED"
-}
-[/block]
-This memory mode allows you to configure your cache to store entries directly in off-heap storage, bypassing on-heap memory. Since all entries are stored off-heap, there is no need to explicitly configure an eviction policy. If the configured off-heap storage size is exceeded (0 for unlimited), an LRU eviction policy is used to evict entries from the off-heap store, optionally moving them to swap space, if one is configured.
-
-To configure `OFFHEAP_TIERED` memory mode, you need to:
-
-1. Set `memoryMode` property of `CacheConfiguration` to `OFFHEAP_TIERED`. 
-2. Enable off-heap memory (optionally).
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- 
Always store cache entries in off-heap memory. -->\n  <property 
name=\"memoryMode\" value=\"OFFHEAP_TIERED\"/>\n\n  <!-- Enable Off-Heap memory 
with max size of 10 Gigabytes (0 for unlimited). -->\n  <property 
name=\"offHeapMaxMemory\" value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n  
...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);\n\n//
 Set off-heap memory to 10GB (0 for unlimited)\ncacheCfg.setOffHeapMaxMemory(10 
* 1024L * 1024L * 1024L);\n\nIgniteConfiguration cfg = new 
IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start 
Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "OFFHEAP_VALUES"
-}
-[/block]
-This memory mode stores keys on-heap and values off-heap. It is useful when keys are small and values are large.
-
-To configure `OFFHEAP_VALUES` memory mode, you need to:
-
-1. Set `memoryMode` property of `CacheConfiguration` to `OFFHEAP_VALUES`. 
-2. Enable off-heap memory.
-3. Configure *eviction policy* for on-heap memory (optionally).
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- 
Always store cache entries in off-heap memory. -->\n  <property 
name=\"memoryMode\" value=\"OFFHEAP_VALUES\"/>\n\n  <!-- Enable Off-Heap memory 
with max size of 10 Gigabytes (0 for unlimited). -->\n  <property 
name=\"offHeapMaxMemory\" value=\"#{10 * 1024L * 1024L * 1024L}\"/>\n  
...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_VALUES);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Swap Space"
-}
-[/block]
-Whenever your data set exceeds the limits of on-heap and off-heap memory, you can configure swap space, in which case Ignite will evict entries to disk instead of discarding them.
-[block:callout]
-{
-  "type": "warning",
-  "title": "Swap Space Performance",
-  "body": "Since swap space is on-disk, it is significantly slower than 
on-heap or off-heap memory."
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n  ...\n  <!-- 
Enable swap. -->\n  <property name=\"swapEnabled\" value=\"true\"/> \n  
...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setSwapEnabled(true);\n\nIgniteConfiguration 
cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file
