http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/checkpointing.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/checkpointing.md 
b/wiki/documentation/compute-grid/checkpointing.md
deleted file mode 100644
index b7ca7ac..0000000
--- a/wiki/documentation/compute-grid/checkpointing.md
+++ /dev/null
@@ -1,255 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Checkpointing provides the ability to save an intermediate job state. It can be useful when long-running jobs need to store some intermediate state to protect against node failures. Then, on restart of a failed node, a job would load the saved checkpoint and continue from where it left off. The only requirement for job checkpoint state is to implement the `java.io.Serializable` interface.
-
-Checkpoints are available through the following methods on the `ComputeTaskSession` interface:
-* `ComputeTaskSession.loadCheckpoint(String)`
-* `ComputeTaskSession.removeCheckpoint(String)`
-* `ComputeTaskSession.saveCheckpoint(String, Object)`
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Master Node Failure Protection"
-}
-[/block]
-One important use case for checkpoints that is not readily apparent is guarding against failure of the "master" node - the node that started the original execution. When the master node fails, Ignite has nowhere to send the results of job execution, and thus the results will be discarded.
-
-To protect against this scenario, you can store the final result of the job execution as a checkpoint and have your logic re-run the entire task in case of a "master" node failure. In that case, the task re-run will be much faster since all the jobs can start from their saved checkpoints.
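-A minimal sketch of this pattern (the checkpoint key "FINAL", the `doWork(...)` computation, and the `"task input"` argument are hypothetical placeholders):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Store the job's final result as a checkpoint so that a task re-run\n// after a master node failure can pick it up instead of recomputing.\nObject result = compute.apply(new IgniteClosure<Object, Object>() {\n  // Task session (injected on closure instantiation).\n  @TaskSessionResource\n  private ComputeTaskSession ses;\n\n  @Override\n  public Object apply(Object arg) {\n    // Check for a final result saved by a previous run.\n    Object res = ses.loadCheckpoint(\"FINAL\");\n\n    if (res == null) {\n      res = doWork(arg); // Hypothetical computation.\n\n      // Save the final result before returning it to the master.\n      ses.saveCheckpoint(\"FINAL\", res);\n    }\n\n    return res;\n  }\n}, \"task input\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-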
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Setting Checkpoints"
-}
-[/block]
-Every compute job can periodically *checkpoint* itself by calling the `ComputeTaskSession.saveCheckpoint(...)` method.
-
-If a job saved a checkpoint, then at the beginning of its execution it should check whether a checkpoint is available and resume from the last saved state.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\ncompute.run(new 
IgniteRunnable() {\n  // Task session (injected on closure instantiation).\n  
@TaskSessionResource\n  private ComputeTaskSession ses;\n\n  @Override \n  
public Object applyx(Object arg) throws GridException {\n    // Try to retrieve 
step1 result.\n    Object res1 = ses.loadCheckpoint(\"STEP1\");\n\n    if (res1 
== null) {\n      res1 = computeStep1(arg); // Do some computation.\n\n      // 
Save step1 result.\n      ses.saveCheckpoint(\"STEP1\", res1);\n    }\n\n    // 
Try to retrieve step2 result.\n    Object res2 = 
ses.loadCheckpoint(\"STEP2\");\n\n    if (res2 == null) {\n      res2 = 
computeStep2(res1); // Do some computation.\n\n      // Save step2 result.\n    
  ses.saveCheckpoint(\"STEP2\", res2);\n    }\n\n    ...\n  }\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "CheckpointSpi"
-}
-[/block]
-In Ignite, checkpointing functionality is provided by `CheckpointSpi`, which has the following out-of-the-box implementations:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Class",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": 
"[SharedFsCheckpointSpi](#file-system-checkpoint-configuration)\n(default)",
-    "0-1": "This implementation uses a shared file system to store 
checkpoints.",
-    "0-2": "Yes",
-    "1-0": "[CacheCheckpointSpi](#cache-checkpoint-configuration)",
-    "1-1": "This implementation uses a cache to store checkpoints.",
-    "2-0": "[JdbcCheckpointSpi](#database-checkpoint-configuration)",
-    "2-1": "This implementation uses a database to store checkpoints.",
-    "3-1": "This implementation uses Amazon S3 to store checkpoints.",
-    "3-0": "[S3CheckpointSpi](#amazon-s3-checkpoint-configuration)"
-  },
-  "cols": 2,
-  "rows": 4
-}
-[/block]
-`CheckpointSpi` is set on `IgniteConfiguration`, which is passed to the `Ignition` class at startup. 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "File System Checkpoint Configuration"
-}
-[/block]
-The following configuration parameters can be used to configure 
`SharedFsCheckpointSpi`:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setDirectoryPaths(Collection)`",
-    "0-1": "Sets directory paths to the shared folders where checkpoints are 
stored. The path can either be absolute or relative to the path specified in 
`IGNITE_HOME` environment or system varialble.",
-    "0-2": "`IGNITE_HOME/work/cp/sharedfs`"
-  },
-  "cols": 3,
-  "rows": 1
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\" 
singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean 
class=\"org.apache.ignite.spi.checkpoint.sharedfs.SharedFsCheckpointSpi\">\n    
<!-- Change to shared directory path in your environment. -->\n      <property 
name=\"directoryPaths\">\n        <list>\n          
<value>/my/directory/path</value>\n          
<value>/other/directory/path</value>\n        </list>\n      </property>\n    
</bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n 
\nSharedFsCheckpointSpi checkpointSpi = new SharedFsCheckpointSpi();\n \n// 
List of checkpoint directories where all files are stored.\nCollection<String> 
dirPaths = new ArrayList<String>();\n 
\ndirPaths.add(\"/my/directory/path\");\ndirPaths.add(\"/other/directory/path\");\n
 \n// Override default directory 
path.\ncheckpointSpi.setDirectoryPaths(dirPaths);\n \n// Override default 
checkpoint SPI.\ncfg.setCheckpointSpi(checkpointSpi);\n \n// Starts Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cache Checkpoint Configuration"
-}
-[/block]
-`CacheCheckpointSpi` is a cache-based implementation of the checkpoint SPI. Checkpoint data is stored in the Ignite data grid, in a pre-configured cache. A configuration sketch is shown after the table below. 
-
-The following configuration parameters can be used to configure 
`CacheCheckpointSpi`:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setCacheName(String)`",
-    "0-1": "Sets cache name to use for storing checkpoints.",
-    "0-2": "`checkpoints`"
-  },
-  "cols": 3,
-  "rows": 1
-}
-[/block]
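-A minimal configuration sketch (the cache name `checkpoints` is an assumption; any pre-configured cache will do):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "CacheCheckpointSpi cpSpi = new CacheCheckpointSpi();\n \n// Name of a pre-configured cache to store checkpoints in.\ncpSpi.setCacheName(\"checkpoints\");\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default checkpoint SPI.\ncfg.setCheckpointSpi(cpSpi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]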
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Database Checkpoint Configuration"
-}
-[/block]
-`JdbcCheckpointSpi` uses a database to store checkpoints. All checkpoints are stored in a database table and are available from all nodes in the grid. Note that every node must have access to the database. A job state can be saved on one node and loaded on another (e.g., if a job gets failed over to a different node after a node failure).
-
-The following configuration parameters can be used to configure 
`JdbcCheckpointSpi` (all are optional):
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setDataSource(DataSource)`",
-    "0-1": "Sets DataSource to use for database access.",
-    "0-2": "No value",
-    "1-0": "`setCheckpointTableName(String)`",
-    "1-1": "Sets checkpoint table name.",
-    "1-2": "`CHECKPOINTS`",
-    "2-0": "`setKeyFieldName(String)`",
-    "2-1": "Sets checkpoint key field name.",
-    "2-2": "`NAME`",
-    "3-0": "`setKeyFieldType(String)`",
-    "3-1": "Sets checkpoint key field type. The field should have 
corresponding SQL string type (`VARCHAR` , for example).",
-    "3-2": "`VARCHAR(256)`",
-    "4-0": "`setValueFieldName(String)`",
-    "4-1": "Sets checkpoint value field name.",
-    "4-2": "`VALUE`",
-    "5-0": "`setValueFieldType(String)`",
-    "5-1": "Sets checkpoint value field type. Note, that the field should have 
corresponding SQL BLOB type. The default value is BLOB, won’t work for all 
databases. For example, if using HSQL DB, then the type should be 
`longvarbinary`.",
-    "5-2": "`BLOB`",
-    "6-0": "`setExpireDateFieldName(String)`",
-    "6-1": "Sets checkpoint expiration date field name.",
-    "6-2": "`EXPIRE_DATE`",
-    "7-0": "`setExpireDateFieldType(String)`",
-    "7-1": "Sets checkpoint expiration date field type. The field should have 
corresponding SQL `DATETIME` type.",
-    "7-2": "`DATETIME`",
-    "8-0": "`setNumberOfRetries(int)`",
-    "8-1": "Sets number of retries in case of any database errors.",
-    "8-2": "2",
-    "9-0": "`setUser(String)`",
-    "9-1": "Sets checkpoint database user name. Note that authentication will 
be performed only if both, user and password are set.",
-    "9-2": "No value",
-    "10-0": "`setPassword(String)`",
-    "10-1": "Sets checkpoint database password.",
-    "10-2": "No value"
-  },
-  "cols": 3,
-  "rows": 11
-}
-[/block]
-##Apache DBCP
-The [Apache DBCP](http://commons.apache.org/proper/commons-dbcp/) project provides various wrappers for data sources and connection pools. You can use these wrappers as Spring beans to configure this SPI from a Spring configuration file or code. Refer to the [Apache DBCP](http://commons.apache.org/proper/commons-dbcp/) project for more information.
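-A minimal Java sketch of wiring a DBCP pooled data source into the SPI (the driver class, URL, and credentials are placeholders):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Apache DBCP pooled data source (driver class and URL are placeholders).\nBasicDataSource ds = new BasicDataSource();\n\nds.setDriverClassName(\"org.h2.Driver\");\nds.setUrl(\"jdbc:h2:tcp://localhost/mem:test\");\nds.setUsername(\"test\");\nds.setPassword(\"test\");\n\n// Pass the pooled data source to the checkpoint SPI.\nJdbcCheckpointSpi checkpointSpi = new JdbcCheckpointSpi();\n\ncheckpointSpi.setDataSource(ds);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-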
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\" 
singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean 
class=\"org.apache.ignite.spi.checkpoint.database.JdbcCheckpointSpi\">\n      
<property name=\"dataSource\">\n        <ref 
bean=\"anyPoolledDataSourceBean\"/>\n      </property>\n      <property 
name=\"checkpointTableName\" value=\"CHECKPOINTS\"/>\n      <property 
name=\"user\" value=\"test\"/>\n      <property name=\"password\" 
value=\"test\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "JdbcCheckpointSpi checkpointSpi = new JdbcCheckpointSpi();\n 
\njavax.sql.DataSource ds = ... // Set datasource.\n \n// Set database 
checkpoint SPI 
parameters.\ncheckpointSpi.setDataSource(ds);\ncheckpointSpi.setUser(\"test\");\ncheckpointSpi.setPassword(\"test\");\n
 \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default 
checkpoint SPI.\ncfg.setCheckpointSpi(checkpointSpi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Amazon S3 Checkpoint Configuration"
-}
-[/block]
-`S3CheckpointSpi` uses Amazon S3 storage to store checkpoints. For information 
about Amazon S3 visit [http://aws.amazon.com/](http://aws.amazon.com/).
-
-The following configuration parameters can be used to configure 
`S3CheckpointSpi`:
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setAwsCredentials(AWSCredentials)`",
-    "0-1": "Sets AWS credentials to use for storing checkpoints.",
-    "0-2": "No value (must be provided)",
-    "1-0": "`setClientConfiguration(Client)`",
-    "1-1": "Sets AWS client configuration.",
-    "1-2": "No value",
-    "2-0": "`setBucketNameSuffix(String)`",
-    "2-1": "Sets bucket name suffix.",
-    "2-2": "default-bucket"
-  },
-  "cols": 3,
-  "rows": 3
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\" 
singleton=\"true\">\n  ...\n  <property name=\"checkpointSpi\">\n    <bean 
class=\"org.apache.ignite.spi.checkpoint.s3.S3CheckpointSpi\">\n      <property 
name=\"awsCredentials\">\n        <bean 
class=\"com.amazonaws.auth.BasicAWSCredentials\">\n          <constructor-arg 
value=\"YOUR_ACCESS_KEY_ID\" />\n          <constructor-arg 
value=\"YOUR_SECRET_ACCESS_KEY\" />\n        </bean>\n      </property>\n    
</bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new IgniteConfiguration();\n 
\nS3CheckpointSpi spi = new S3CheckpointSpi();\n \nAWSCredentials cred = new 
BasicAWSCredentials(YOUR_ACCESS_KEY_ID, YOUR_SECRET_ACCESS_KEY);\n 
\nspi.setAwsCredentials(cred);\n \nspi.setBucketNameSuffix(\"checkpoints\");\n 
\n// Override default checkpoint SPI.\ncfg.setCheckpointSpi(spi);\n \n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/collocate-compute-and-data.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/collocate-compute-and-data.md 
b/wiki/documentation/compute-grid/collocate-compute-and-data.md
deleted file mode 100644
index e4d064e..0000000
--- a/wiki/documentation/compute-grid/collocate-compute-and-data.md
+++ /dev/null
@@ -1,46 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Collocation of computations with data allows you to minimize data serialization over the network and can significantly improve the performance and scalability of your application. Whenever possible, you should always make your best effort to collocate your computations with the cluster nodes caching the data that needs to be processed.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Affinity Call and Run Methods"
-}
-[/block]
-`affinityCall(...)` and `affinityRun(...)` methods collocate jobs with the nodes on which data is cached. In other words, given a cache name and an affinity key, these methods try to locate the node on which the key resides in the specified Ignite cache, and then execute the job there. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Integer, String> cache = 
ignite.cache(CACHE_NAME);\n\nIgniteCompute compute = ignite.compute();\n\nfor 
(int key = 0; key < KEY_CNT; key++) {\n    // This closure will execute on the 
remote node where\n    // data with the 'key' is located.\n    
compute.affinityRun(CACHE_NAME, key, () -> { \n        // Peek is a local 
memory lookup.\n        System.out.println(\"Co-located [key= \" + key + \", 
value= \" + cache.peek(key) +']');\n    });\n}",
-      "language": "java",
-      "name": "affinityRun"
-    },
-    {
-      "code": "IgniteCache<Integer, String> cache = 
ignite.cache(CACHE_NAME);\n\nIgniteCompute asyncCompute = 
ignite.compute().withAsync();\n\nList<IgniteFuture<?>> futs = new 
ArrayList<>();\n\nfor (int key = 0; key < KEY_CNT; key++) {\n    // This 
closure will execute on the remote node where\n    // data with the 'key' is 
located.\n    asyncCompute.affinityRun(CACHE_NAME, key, () -> { \n        // 
Peek is a local memory lookup.\n        System.out.println(\"Co-located [key= 
\" + key + \", value= \" + cache.peek(key) +']');\n    });\n  \n    
futs.add(asyncCompute.future());\n}\n\n// Wait for all futures to 
complete.\nfuts.stream().forEach(IgniteFuture::get);",
-      "language": "java",
-      "name": "async affinityRun"
-    },
-    {
-      "code": "final IgniteCache<Integer, String> cache = 
ignite.cache(CACHE_NAME);\n\nIgniteCompute compute = ignite.compute();\n\nfor 
(int i = 0; i < KEY_CNT; i++) {\n    final int key = i;\n \n    // This closure 
will execute on the remote node where\n    // data with the 'key' is located.\n 
   compute.affinityRun(CACHE_NAME, key, new IgniteRunnable() {\n        
@Override public void run() {\n            // Peek is a local memory lookup.\n  
          System.out.println(\"Co-located [key= \" + key + \", value= \" + 
cache.peek(key) +']');\n        }\n    });\n}",
-      "language": "java",
-      "name": "java7 affinityRun"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/compute-grid.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/compute-grid.md 
b/wiki/documentation/compute-grid/compute-grid.md
deleted file mode 100644
index a8863f8..0000000
--- a/wiki/documentation/compute-grid/compute-grid.md
+++ /dev/null
@@ -1,73 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Distributed computations are performed in a parallel fashion to gain **high performance**, **low latency**, and **linear scalability**. Ignite compute grid provides a set of simple APIs that allow users to distribute computations and data processing across multiple computers in the cluster. Distributed parallel processing is based on the ability to take any computation, execute it on any set of cluster nodes, and return the results back.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/zrJB0GshRdS3hLn0QGlI";,
-        "in_memory_compute.png",
-        "400",
-        "301",
-        "#da4204",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-##Features
-  * [Distributed Closure Execution](doc:distributed-closures)
-  * [MapReduce & ForkJoin Processing](doc:compute-tasks)
-  * [Clustered Executor Service](doc:executor-service)
-  * [Collocation of Compute and Data](doc:collocate-compute-and-data) 
-  * [Load Balancing](doc:load-balancing) 
-  * [Fault Tolerance](doc:fault-tolerance)
-  * [Job State Checkpointing](doc:checkpointing) 
-  * [Job Scheduling](doc:job-scheduling) 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCompute"
-}
-[/block]
-The `IgniteCompute` interface provides methods for running many types of computations over nodes in a cluster or a cluster group. These methods can be used to execute tasks or closures in a distributed fashion.
-
-All jobs and closures are [guaranteed to be executed](doc:fault-tolerance) as long as there is at least one node standing. If a job execution is rejected due to a lack of resources, a failover mechanism is provided. In case of failover, the load balancer picks the next available node to execute the job. Here is how you can get an `IgniteCompute` instance:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Get compute instance 
over all nodes in the cluster.\nIgniteCompute compute = ignite.compute();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-You can also limit the scope of computations to a [Cluster 
Group](doc:cluster-groups). In this case, computation will only execute on the 
nodes within the cluster group.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignitition.ignite();\n\nClusterGroup 
remoteGroup = ignite.cluster().forRemotes();\n\n// Limit computations only to 
remote nodes (exclude local node).\nIgniteCompute compute = 
ignite.compute(remoteGroup);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/compute-tasks.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/compute-tasks.md 
b/wiki/documentation/compute-grid/compute-tasks.md
deleted file mode 100644
index ef15f15..0000000
--- a/wiki/documentation/compute-grid/compute-tasks.md
+++ /dev/null
@@ -1,122 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-`ComputeTask` is the Ignite abstraction for simplified in-memory MapReduce, which is also very close to the ForkJoin paradigm. Pure MapReduce was never built for performance and only works well when dealing with off-line batch-oriented processing (e.g. Hadoop MapReduce). However, when computing on data that resides in-memory, real-time low latencies and high throughput usually take the highest priority. Simplicity of the API also becomes very important. With that in mind, Ignite introduced the `ComputeTask` API, which is a light-weight MapReduce (or ForkJoin) implementation.
-[block:callout]
-{
-  "type": "info",
-  "body": "Use `ComputeTask` only when you need fine-grained control over the 
job-to-node mapping, or custom fail-over logic. For all other cases you should 
use simple closure executions on the cluster documented in [Distributed 
Computations](doc:compute) section."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ComputeTask"
-}
-[/block]
-`ComputeTask` defines jobs to execute on the cluster and the mappings of those jobs to nodes. It also defines how to process (reduce) the job results. All `IgniteCompute.execute(...)` methods execute the given task on the grid. User applications should implement the `map(...)` and `reduce(...)` methods of the `ComputeTask` interface.
-
-Tasks are defined by implementing 2 or 3 methods of the `ComputeTask` interface.
-
-##Map Method
-Method `map(...)` instantiates the jobs and maps them to worker nodes. The 
method receives the collection of cluster nodes on which the task is run and 
the task argument. The method should return a map with jobs as keys and mapped 
worker nodes as values. The jobs are then sent to the mapped nodes and executed 
there.
-[block:callout]
-{
-  "type": "info",
-  "body": "Refer to [ComputeTaskSplitAdapter](#computetasksplitadapter) for 
simplified implementation of the `map(...)` method."
-}
-[/block]
-##Result Method
-Method `result(...)` is called each time a job completes on some cluster node. 
It receives the result returned by the completed job, as well as the list of 
all the job results received so far. The method should return a 
`ComputeJobResultPolicy` instance, indicating what to do next:
-  * `WAIT` - wait for all remaining jobs to complete (if any)
-  * `REDUCE` - immediately move to the reduce step, discarding all the remaining jobs and any results not yet received
-  * `FAILOVER` - failover the job to another node (see Fault Tolerance)
-All the received job results will be available in the `reduce(...)` method as 
well.
-
-##Reduce Method
-Method `reduce(...)` is called during the reduce step, when all the jobs have completed (or the REDUCE result policy was returned from the `result(...)` method). The method receives a list with all the completed results and should return the final result of the computation. 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Compute Task Adapters"
-}
-[/block]
-It is not necessary to implement all 3 methods of the `ComputeTask` API each time you need to define a computation. There are a number of helper classes that let you describe only a particular piece of your logic, leaving the rest for Ignite to handle automatically. 
-
-##ComputeTaskAdapter
-`ComputeTaskAdapter` defines a default implementation of the `result(...)` 
method which returns `FAILOVER` policy if a job threw an exception and `WAIT` 
policy otherwise, thus waiting for all jobs to finish with a result.
-
-##ComputeTaskSplitAdapter
-`ComputeTaskSplitAdapter` extends `ComputeTaskAdapter` and adds the capability to automatically assign jobs to nodes. It hides the `map(...)` method and adds a new `split(...)` method in which the user only needs to provide a collection of the jobs to be executed (the mapping of those jobs to nodes will be handled automatically by the adapter in a load-balanced fashion). 
-
-This adapter is especially useful in homogeneous environments where all nodes 
are equally suitable for executing jobs and the mapping step can be done 
implicitly.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ComputeJob"
-}
-[/block]
-All jobs that are spawned by a task are implementations of the `ComputeJob` interface. The `execute()` method of this interface defines the job logic and should return a job result. The `cancel()` method defines the logic for the case when the job is discarded (for example, when the task decides to reduce immediately or to cancel).
-
-##ComputeJobAdapter
-Convenience adapter which provides a no-op implementation of the `cancel()` method. A hypothetical cancellable job is sketched below.
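-The sketch below illustrates the `cancel()` hook (hypothetical; `doUnitOfWork(...)` is a made-up helper): a volatile flag set from `cancel()` is polled by the compute loop.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyCancellableJob extends ComputeJobAdapter {\n  // Flag set from cancel() and polled by the compute loop.\n  private volatile boolean cancelled;\n\n  @Override public Object execute() {\n    int processed = 0;\n\n    // Stop early if the job gets discarded (e.g., the task reduced early).\n    while (!cancelled && processed < 1000)\n      processed += doUnitOfWork(processed); // Hypothetical unit of work.\n\n    return processed;\n  }\n\n  @Override public void cancel() {\n    cancelled = true;\n  }\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-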
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Example"
-}
-[/block]
-Here is an example of `ComputeTask` and `ComputeJob` implementations.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute task on 
the clustr and wait for its completion.\nint cnt = 
grid.compute().execute(CharacterCountTask.class, \"Hello Grid Enabled 
World!\");\n \nSystem.out.println(\">>> Total number of characters in the 
phrase is '\" + cnt + \"'.\");\n \n/**\n * Task to count non-white-space 
characters in a phrase.\n */\nprivate static class CharacterCountTask extends 
ComputeTaskSplitAdapter<String, Integer> {\n  // 1. Splits the received string 
into to words\n  // 2. Creates a child job for each word\n  // 3. Sends created 
jobs to other nodes for processing. \n  @Override \n  public List<ClusterNode> 
split(List<ClusterNode> subgrid, String arg) {\n    String[] words = 
arg.split(\" \");\n\n    List<ComputeJob> jobs = new 
ArrayList<>(words.length);\n\n    for (final String word : arg.split(\" \")) 
{\n      jobs.add(new ComputeJobAdapter() {\n        @Override public Object 
execute() {\n          System.out.println(\">>> Printing '
 \" + word + \"' on from compute job.\");\n\n          // Return number of 
letters in the word.\n          return word.length();\n        }\n      });\n   
 }\n\n    return jobs;\n  }\n\n  @Override \n  public Integer 
reduce(List<ComputeJobResult> results) {\n    int sum = 0;\n\n    for 
(ComputeJobResult res : results)\n      sum += res.<Integer>getData();\n\n    
return sum;\n  }\n}",
-      "language": "java",
-      "name": "ComputeTaskSplitAdapter"
-    },
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute task on 
the clustr and wait for its completion.\nint cnt = 
grid.compute().execute(CharacterCountTask.class, \"Hello Grid Enabled 
World!\");\n \nSystem.out.println(\">>> Total number of characters in the 
phrase is '\" + cnt + \"'.\");\n \n/**\n * Task to count non-white-space 
characters in a phrase.\n */\nprivate static class CharacterCountTask extends 
ComputeTaskAdapter<String, Integer> {\n    // 1. Splits the received string 
into to words\n    // 2. Creates a child job for each word\n    // 3. Sends 
created jobs to other nodes for processing. \n    @Override \n    public Map<? 
extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, String arg) {\n 
       String[] words = arg.split(\" \");\n      \n        Map<ComputeJob, 
ClusterNode> map = new HashMap<>(words.length);\n        \n        
Iterator<ClusterNode> it = subgrid.iterator();\n         \n        for (final 
String word : arg.split(\" \")) {\n      
       // If we used all nodes, restart the iterator.\n            if 
(!it.hasNext())\n                it = subgrid.iterator();\n             \n      
      ClusterNode node = it.next();\n                \n            map.put(new 
ComputeJobAdapter() {\n                @Override public Object execute() {\n    
                System.out.println(\">>> Printing '\" + word + \"' on this node 
from grid job.\");\n                  \n                    // Return number of 
letters in the word.\n                    return word.length();\n               
 }\n             }, node);\n        }\n      \n        return map;\n    }\n \n  
  @Override \n    public Integer reduce(List<ComputeJobResult> results) {\n     
   int sum = 0;\n      \n        for (ComputeJobResult res : results)\n         
   sum += res.<Integer>getData();\n      \n        return sum;\n    }\n}",
-      "language": "java",
-      "name": "ComputeTaskAdapter"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Distributed Task Session"
-}
-[/block]
-A distributed task session is created for every task execution. It is defined by the `ComputeTaskSession` interface. The task session is visible to the task and all the jobs spawned by it, so attributes set on a task or on a job can be accessed on other jobs. The task session also allows you to receive notifications when attributes are set, or to wait for an attribute to be set.
-
-The sequence in which session attributes are set is consistent across the task 
and all job siblings within it. There will never be a case when one job sees 
attribute A before attribute B, and another job sees attribute B before A.
-
-In the example below, we have all jobs synchronize on STEP1 before moving on 
to STEP2. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = 
ignite.commpute();\n\ncompute.execute(new ComputeTasSplitAdapter<Object, 
Object>() {\n  @Override \n  protected Collection<? extends GridJob> split(int 
gridSize, Object arg)  {\n    Collection<ComputeJob> jobs = new 
LinkedList<>();\n\n    // Generate jobs by number of nodes in the grid.\n    
for (int i = 0; i < gridSize; i++) {\n      jobs.add(new ComputeJobAdapter(arg) 
{\n        // Auto-injected task session.\n        @TaskSessionResource\n       
 private ComputeTaskSession ses;\n        \n        // Auto-injected job 
context.\n        @JobContextResource\n        private ComputeJobContext 
jobCtx;\n\n        @Override \n        public Object execute() {\n          // 
Perform STEP1.\n          ...\n          \n          // Tell other jobs that 
STEP1 is complete.\n          ses.setAttribute(jobCtx.getJobId(), \"STEP1\");\n 
         \n          // Wait for other jobs to complete STEP1.\n          for 
(ComputeJobSibling sibling : ses.getJobSiblin
 gs())\n            ses.waitForAttribute(sibling.getJobId(), \"STEP1\", 0);\n   
       \n          // Move on to STEP2.\n          ...\n        }\n      }\n    
}\n  }\n               \n  @Override \n  public Object 
reduce(List<ComputeJobResult> results) {\n    // No-op.\n    return null;\n  
}\n}, null);\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/distributed-closures.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/distributed-closures.md 
b/wiki/documentation/compute-grid/distributed-closures.md
deleted file mode 100644
index 035d361..0000000
--- a/wiki/documentation/compute-grid/distributed-closures.md
+++ /dev/null
@@ -1,124 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite compute grid allows you to broadcast and load-balance any closure within the cluster or a cluster group, including plain Java `runnables` and `callables`.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Broadcast Methods"
-}
-[/block]
-All `broadcast(...)` methods broadcast a given job to all nodes in the cluster 
or cluster group. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast 
to remote nodes only.\nIgniteCompute compute = 
ignite.compute(ignite.cluster().forRemotes());\n\n// Print out hello message on 
remote nodes in the cluster group.\ncompute.broadcast(() -> 
System.out.println(\"Hello Node: \" + ignite.cluster().localNode().id()));",
-      "language": "java",
-      "name": "broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast 
to remote nodes only and \n// enable asynchronous mode.\nIgniteCompute compute 
= ignite.compute(ignite.cluster().forRemotes()).withAsync();\n\n// Print out 
hello message on remote nodes in the cluster group.\ncompute.broadcast(() -> 
System.out.println(\"Hello Node: \" + 
ignite.cluster().localNode().id()));\n\nComputeTaskFuture<?> fut = 
compute.future();\n\nfut.listenAsync(f -> System.out.println(\"Finished sending 
broadcast job.\"));",
-      "language": "java",
-      "name": "async broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast 
to remote nodes only.\nIgniteCompute compute = ignite.compute(ignite.cluster().forRemotes());\n\n// Print out hello message on 
remote nodes in projection.\ncompute.broadcast(\n    new IgniteRunnable() {\n   
     @Override public void run() {\n            // Print ID of remote node on 
remote node.\n            System.out.println(\">>> Hello Node: \" + 
ignite.cluster().localNode().id());\n        }\n    }\n);",
-      "language": "java",
-      "name": "java7 broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast 
to remote nodes only and \n// enable asynchronous mode.\nIgniteCompute compute 
= ignite.compute(ignite.cluster.forRemotes()).withAsync();\n\n// Print out 
hello message on remote nodes in the cluster group.\ncompute.broadcast(\n    
new IgniteRunnable() {\n        @Override public void run() {\n            // 
Print ID of remote node on remote node.\n            System.out.println(\">>> 
Hello Node: \" + ignite.cluster().localNode().id());\n        }\n    
}\n);\n\nComputeTaskFuture<?> fut = compute.future():\n\nfut.listenAsync(new 
IgniteInClosure<? super ComputeTaskFuture<?>>() {\n    public void 
apply(ComputeTaskFuture<?> fut) {\n        System.out.println(\"Finished 
sending broadcast job to cluster.\");\n    }\n});",
-      "language": "java",
-      "name": "java7 async broadcast"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Call and Run Methods"
-}
-[/block]
-All `call(...)` and `run(...)` methods execute either individual jobs or 
collections of jobs on the cluster or a cluster group.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new 
ArrayList<>();\n \n// Iterate through all words in the sentence and create 
callable jobs.\nfor (String word : \"Count characters using callable\".split(\" 
\"))\n    calls.add(word::length);\n\n// Execute collection of callables on the 
cluster.\nCollection<Integer> res = ignite.compute().call(calls);\n\n// Add all 
the word lengths received from cluster nodes.\nint total = 
res.stream().mapToInt(Integer::intValue).sum(); ",
-      "language": "java",
-      "name": "call"
-    },
-    {
-      "code": "IgniteCompute compute = Ignite.compute();\n\n// Iterate through 
all words and print \n// each word on a different cluster node.\nfor (String 
word : \"Print words on different cluster nodes\".split(\" \"))\n    // Run on 
some cluster node.\n    compute.run(() -> System.out.println(word));",
-      "language": "java",
-      "name": "run"
-    },
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new 
ArrayList<>();\n \n// Iterate through all words in the sentence and create 
callable jobs.\nfor (String word : \"Count characters using callable\".split(\" 
\"))\n    calls.add(word::length);\n\n// Enable asynchronous 
mode.\nIgniteCompute asyncCompute = ignite.compute().withAsync();\n\n// 
Asynchronously execute collection of callables on the 
cluster.\nasyncCompute.call(calls);\n\nasyncCompute.future().listenAsync(fut -> 
{\n    // Total number of characters.\n    int total = 
fut.get().stream().mapToInt(Integer::intValue).sum(); \n  \n    
System.out.println(\"Total number of characters: \" + total);\n});",
-      "language": "java",
-      "name": "async call"
-    },
-    {
-      "code": "IgniteCompute asyncCompute = 
ignite.compute().withAsync();\n\nCollection<ComputeTaskFuture<?>> futs = new 
ArrayList<>();\n\n// Iterate through all words and print \n// each word on a 
different cluster node.\nfor (String word : \"Print words on different cluster 
nodes\".split(\" \")) {\n    // Asynchronously run on some cluster node.\n    
asyncCompute.run(() -> System.out.println(word));\n\n    
futs.add(asyncCompute.future());\n}\n\n// Wait for completion of all 
futures.\nfuts.stream().forEach(ComputeTaskFuture::get);",
-      "language": "java",
-      "name": "async run"
-    },
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new 
ArrayList<>();\n \n// Iterate through all words in the sentence and create 
callable jobs.\nfor (final String word : \"Count characters using 
callable\".split(\" \")) {\n    calls.add(new GridCallable<Integer>() {\n       
 @Override public Integer call() throws Exception {\n            return 
word.length(); // Return word length.\n        }\n    });\n}\n \n// Execute 
collection of callables on the cluster.\nCollection<Integer> res = 
ignite.compute().call(calls);\n\nint total = 0;\n\n// Total number of 
characters.\n// Looks much better in Java 8.\nfor (Integer i : res)\n  total += 
i;",
-      "language": "java",
-      "name": "java7 call"
-    },
-    {
-      "code": "IgniteCompute asyncCompute = 
ignite.compute().withAsync();\n\nCollection<ComputeTaskFuture<?>> futs = new 
ArrayList<>();\n\n// Iterate through all words and print\n// each word on a 
different cluster node.\nfor (String word : \"Print words on different cluster 
nodes\".split(\" \")) {\n    // Asynchronously run on some cluster node.\n    
asyncCompute.run(new IgniteRunnable() {\n        @Override public void run() 
{\n            System.out.println(word);\n        }\n    });\n\n    
futs.add(asyncCompute.future());\n}\n\n// Wait for completion of all 
futures.\nfor (ComputeTaskFuture<?> f : futs)\n  f.get();",
-      "language": "java",
-      "name": "java7 async run"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Apply Methods"
-}
-[/block]
-A closure is a block of code that encloses its body and any outside variables used inside of it as a function object. You can then pass such a function object anywhere you can pass a variable, and execute it. All `apply(...)` methods execute closures on the cluster. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute  = ignite.compute();\n\n// Execute 
closure on all cluster nodes.\nCollection<Integer> res = 
ignite.compute().apply(\n    String::length,\n    Arrays.asList(\"Count 
characters using closure\".split(\" \"))\n);\n     \n// Add all the word 
lengths received from cluster nodes.\nint total = 
res.stream().mapToInt(Integer::intValue).sum(); ",
-      "language": "java",
-      "name": "apply"
-    },
-    {
-      "code": "// Enable asynchronous mode.\nIgniteCompute asyncCompute = 
ignite.compute().withAsync();\n\n// Execute closure on all cluster nodes.\n// 
If the number of closures is less than the number of \n// parameters, then 
Ignite will create as many closures \n// as there are 
parameters.\nasyncCompute.apply(\n    
String::length,\n    Arrays.asList(\"Count characters using closure\".split(\" 
\"))\n);\n     \nasyncCompute.future().listenAsync(fut -> {\n    // Total 
number of characters.\n    int total = 
fut.get().stream().mapToInt(Integer::intValue).sum(); \n  \n    
System.out.println(\"Total number of characters: \" + total);\n});",
-      "language": "java",
-      "name": "async apply"
-    },
-    {
-      "code": "// Execute closure on all cluster nodes.\n// If the number of 
closures is less than the number of \n// parameters, then Ignite will create as 
many closures \n// as there are parameters.\nCollection<Integer> res = 
ignite.compute().apply(\n    new IgniteClosure<String, Integer>() {\n        
@Override public Integer apply(String word) {\n            // Return number of 
letters in the word.\n            return word.length();\n        }\n    },\n    
Arrays.asList(\"Count characters using closure\".split(\" \"))\n).get();\n     
\nint sum = 0;\n \n// Add up individual word lengths received from remote 
nodes\nfor (int len : res)\n    sum += len;",
-      "language": "java",
-      "name": "java7 apply"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/executor-service.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/executor-service.md 
b/wiki/documentation/compute-grid/executor-service.md
deleted file mode 100644
index 3ea86fc..0000000
--- a/wiki/documentation/compute-grid/executor-service.md
+++ /dev/null
@@ -1,40 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-[IgniteCompute](doc:compute) provides a convenient API for executing computations on the cluster. However, you can also work directly with the standard `ExecutorService` interface from the JDK. Ignite provides a cluster-enabled implementation of `ExecutorService` and automatically executes all the computations in a load-balanced fashion within the cluster. Your computations also become fault-tolerant and are guaranteed to execute as long as there is at least one node left. You can think of it as a distributed cluster-enabled thread pool. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Get cluster-enabled executor service.\nExecutorService exec 
= ignite.executorService();\n \n// Iterate through all words in the sentence 
and create jobs.\nfor (final String word : \"Print words using 
runnable\".split(\" \")) {\n  // Execute runnable on some node.\n  
exec.submit(new IgniteRunnable() {\n    @Override public void run() {\n      
System.out.println(\">>> Printing '\" + word + \"' on this node from grid 
job.\");\n    }\n  });\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
- 
-You can also limit the job execution to some subset of nodes in your grid:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Cluster group for nodes where the attribute 'worker' is 
defined.\nClusterGroup workerGrp = ignite.cluster().forAttribute(\"ROLE\", 
\"worker\");\n\n// Get cluster-enabled executor service for the above cluster 
group.\nExecutorService exec = icnite.executorService(workerGrp);\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/fault-tolerance.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/fault-tolerance.md 
b/wiki/documentation/compute-grid/fault-tolerance.md
deleted file mode 100644
index 1eed62d..0000000
--- a/wiki/documentation/compute-grid/fault-tolerance.md
+++ /dev/null
@@ -1,96 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite supports automatic job failover. In case of a node crash, jobs are automatically transferred to other available nodes for re-execution. However, in Ignite you can also treat any job result as a failure. The worker node can still be alive, but it may be running low on CPU, I/O, disk space, etc. There are many conditions that may result in a failure within your application, and in such cases you may want to trigger a failover. Moreover, you have the ability to choose the node a job should be failed over to, as it could be different for different applications or different computations within the same application.
-
-The `FailoverSpi` is responsible for handling the selection of a new node for 
the execution of a failed job. `FailoverSpi` inspects the failed job and the 
list of all available grid nodes on which the job execution can be retried. It 
ensures that the job is not re-mapped to the same node it had failed on. 
Failover is triggered when the method `ComputeTask.result(...)` returns the 
`ComputeJobResultPolicy.FAILOVER` policy. Ignite comes with a number of 
built-in customizable Failover SPI implementations.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "At Least Once Guarantee"
-}
-[/block]
-As long as there is at least one node standing, no job will ever be lost.
-
-By default, Ignite will failover all jobs from stopped or crashed nodes automatically. For custom failover behavior, you should implement the `ComputeTask.result()` method. The example below triggers failover whenever a job throws any `IgniteException` (or its subclasses):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyComputeTask extends 
ComputeTaskSplitAdapter<String, String> {\n    ...\n      \n    @Override \n    
public ComputeJobResultPolicy result(ComputeJobResult res, 
List<ComputeJobResult> rcvd) {\n        IgniteException err = 
res.getException();\n     \n        if (err != null)\n            return 
ComputeJobResultPolicy.FAILOVER;\n    \n        // If there is no exception, 
wait for all job results.\n        return ComputeJobResultPolicy.WAIT;\n    }\n 
 \n    ...\n}\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Closure Failover"
-}
-[/block]
-Closure failover is by default governed by `ComputeTaskAdapter`, which is triggered if a remote node either crashes or rejects closure execution. This default behavior may be overridden by using the `IgniteCompute.withNoFailover()` method, which creates an instance of `IgniteCompute` with a **no-failover flag** set on it. Here is an example:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = 
ignite.compute().withNoFailover();\n\ncompute.apply((String arg) -> {\n    // Do 
something\n    ...\n}, \"Some argument\");\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "AlwaysFailOverSpi"
-}
-[/block]
-`AlwaysFailoverSpi` always reroutes a failed job to another node. Note that an attempt will first be made to reroute the failed job to a node that the task was not executed on. If no such nodes are available, then an attempt will be made to reroute the failed job to the nodes that may be running other jobs from the same task. If none of the above attempts succeed, then the job will not be failed over and null will be returned.
-
-The following configuration parameters can be used to configure 
`AlwaysFailoverSpi`.
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setMaximumFailoverAttempts(int)`",
-    "0-1": "Sets the maximum number of attempts to fail-over a failed job to 
other nodes.",
-    "0-2": "5"
-  },
-  "cols": 3,
-  "rows": 1
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" 
class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  
<bean class=\"org.apache.ignite.spi.failover.always.AlwaysFailoverSpi\">\n    
<property name=\"maximumFailoverAttempts\" value=\"5\"/>\n  </bean>\n  
...\n</bean>\n",
-      "language": "xml"
-    },
-    {
-      "code": "AlwaysFailoverSpi failSpi = new AlwaysFailoverSpi();\n 
\nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override maximum 
failover attempts.\nfailSpi.setMaximumFailoverAttempts(5);\n \n// Override the 
default failover SPI.\ncfg.setFailoverSpi(failSpi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/job-scheduling.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/job-scheduling.md 
b/wiki/documentation/compute-grid/job-scheduling.md
deleted file mode 100644
index 568cbc7..0000000
--- a/wiki/documentation/compute-grid/job-scheduling.md
+++ /dev/null
@@ -1,86 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-In Ignite, jobs are mapped to cluster nodes during the initial task split or closure execution on the client side. However, once jobs arrive at the designated nodes, they need to be ordered for execution. By default, jobs are submitted to a thread pool and are executed in random order. However, if you need fine-grained control over job ordering, you can enable `CollisionSpi`.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "FIFO Ordering"
-}
-[/block]
-`FifoQueueCollisionSpi` allows a certain number of jobs to proceed without interruption, in first-in, first-out order. All other jobs are put on a waiting list until their turn.
-
-The number of parallel jobs is controlled by the `parallelJobsNumber` configuration parameter. The default is the number of cores times 2.
-
-##One at a Time
-Note that by setting `parallelJobsNumber` to 1, you can guarantee that all 
jobs will be executed one-at-a-time, and no two jobs will be executed 
concurrently.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\" 
singleton=\"true\">\n  ...\n  <property name=\"collisionSpi\">\n    <bean 
class=\"org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi\">\n    
  <!-- Execute one job at a time. -->\n      <property 
name=\"parallelJobsNumber\" value=\"1\"/>\n    </bean>\n  </property>\n  
...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();\n 
\n// Execute jobs sequentially, one at a time, \n// by setting parallel job 
number to 1.\ncolSpi.setParallelJobsNumber(1);\n \nIgniteConfiguration cfg = 
new IgniteConfiguration();\n \n// Override default collision 
SPI.\ncfg.setCollisionSpi(colSpi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": null
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Priority Ordering"
-}
-[/block]
-`PriorityQueueCollisionSpi` allows you to assign priorities to individual jobs, so jobs with higher priority are executed ahead of lower-priority jobs. 
-
-##Task Priorities
-Task priorities are set in the [task session](/docs/compute-tasks#distributed-task-session) via the `grid.task.priority` attribute. If no priority has been assigned to a task, then the default priority of 0 is used.
-
-Below is an example showing how task priority can be set. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyUrgentTask extends 
ComputeTaskSplitAdapter<Object, Object> {\n  // Auto-injected task session.\n  
@TaskSessionResource\n  private GridTaskSession taskSes = null;\n \n  
@Override\n  protected Collection<ComputeJob> split(int gridSize, Object arg) 
{\n    ...\n    // Set high task priority.\n    
taskSes.setAttribute(\"grid.task.priority\", 10);\n \n    List<ComputeJob> jobs 
= new ArrayList<>(gridSize);\n    \n    for (int i = 1; i <= gridSize; i++) {\n 
     jobs.add(new GridJobAdapter() {\n        ...\n      });\n    }\n    ...\n  
    \n    // These jobs will be executed with higher priority.\n    return 
jobs;\n  }\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Just like with [FIFO Ordering](#fifo-ordering), the number of parallel jobs is controlled by the `parallelJobsNumber` configuration parameter. 
-
-##Configuration
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\" 
singleton=\"true\">\n\t...\n\t<property name=\"collisionSpi\">\n\t\t<bean 
class=\"org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi\">\n
      <!-- \n        Change the parallel job number if needed.\n        Default 
is number of cores times 2.\n      -->\n\t\t\t<property 
name=\"parallelJobsNumber\" 
value=\"5\"/>\n\t\t</bean>\n\t</property>\n\t...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "PriorityQueueCollisionSpi colSpi = new 
PriorityQueueCollisionSpi();\n\n// Change the parallel job number if 
needed.\n// Default is number of cores times 
2.\ncolSpi.setParallelJobsNumber(5);\n \nIgniteConfiguration cfg = new 
IgniteConfiguration();\n \n// Override default collision 
SPI.\ncfg.setCollisionSpi(colSpi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": ""
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/compute-grid/load-balancing.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/compute-grid/load-balancing.md 
b/wiki/documentation/compute-grid/load-balancing.md
deleted file mode 100644
index e249cf2..0000000
--- a/wiki/documentation/compute-grid/load-balancing.md
+++ /dev/null
@@ -1,76 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-The load balancing component balances job distribution among cluster nodes. In Ignite, load balancing is achieved via `LoadBalancingSpi`, which controls the load on all nodes and makes sure that every node in the cluster is equally loaded. In homogeneous environments with homogeneous tasks, load balancing is achieved by random or round-robin policies. However, in many other use cases, especially under uneven load, more complex adaptive load-balancing policies may be needed.
-[block:callout]
-{
-  "type": "info",
-  "body": "Note that load balancing is triggered whenever your jobs are not 
collocated with data or have no real preference on which node to execute. If 
[Collocation Of Compute and Data](doc:collocate-compute-and-data) is used, then 
data affinity takes priority over load balancing.",
-  "title": "Data Affinity"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Round-Robin Load Balancing"
-}
-[/block]
-`RoundRobinLoadBalancingSpi` iterates through nodes in round-robin fashion and 
picks the next sequential node. Two modes of operation are supported: per-task 
and global.
-
-##Per-Task Mode
-When configured in per-task mode, the implementation will pick a random node at the beginning of every task execution and then sequentially iterate through all nodes in the topology, starting from the picked node. This is the default configuration. For cases when the split size is equal to the number of nodes, this mode guarantees that all nodes will participate in the split.
-
-##Global Mode
-When configured in global mode, a single sequential queue of nodes is maintained for all tasks, and the next node in the queue is picked every time. In this mode (unlike in per-task mode), it is possible that, even when the split size is equal to the number of nodes, some jobs within the same task will be assigned to the same node if multiple tasks are executing concurrently.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" 
class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  
<property name=\"loadBalancingSpi\">\n    <bean 
class=\"org.apache.ignite.spi.loadbalancing.roundrobin.RoundRobinLoadBalancingSpi\">\n
      <!-- Set to per-task round-robin mode (this is default behavior). -->\n   
   <property name=\"perTask\" value=\"true\"/>\n    </bean>\n  </property>\n  
...\n</bean>",
-      "language": "xml",
-      "name": null
-    },
-    {
-      "code": "RoundRobinLoadBalancingSpi = new 
RoundRobinLoadBalancingSpi();\n \n// Configure SPI to use per-task mode (this 
is default behavior).\nspi.setPerTask(true);\n \nIgniteConfiguration cfg = new 
IgniteConfiguration();\n \n// Override default load balancing 
SPI.\ncfg.setLoadBalancingSpi(spi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
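-
-For completeness, below is a sketch of switching the SPI to global mode. This is simply the Java example above with `perTask` set to `false`; the class and setter are the same ones shown there.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "RoundRobinLoadBalancingSpi spi = new RoundRobinLoadBalancingSpi();\n \n// Maintain a single global queue of nodes shared by all tasks.\nspi.setPerTask(false);\n \nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override default load balancing SPI.\ncfg.setLoadBalancingSpi(spi);\n \n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]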
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Random and Weighted Load Balancing"
-}
-[/block]
-`WeightedRandomLoadBalancingSpi` picks a random node for job execution by default. You can also optionally assign weights to nodes, so that nodes with larger weights end up getting proportionally more jobs routed to them. By default, all nodes get an equal weight of 10.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" 
class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  
<property name=\"loadBalancingSpi\">\n    <bean 
class=\"org.apache.ignite.spi.loadbalancing.weightedrandom.WeightedRandomLoadBalancingSpi\">\n
      <property name=\"useWeights\" value=\"true\"/>\n      <property 
name=\"nodeWeight\" value=\"10\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "WeightedRandomLoadBalancingSpi = new 
WeightedRandomLoadBalancingSpi();\n \n// Configure SPI to used weighted random 
load balancing.\nspi.setUseWeights(true);\n \n// Set weight for the local 
node.\nspi.setWeight(10);\n \nIgniteConfiguration cfg = new 
IgniteConfiguration();\n \n// Override default load balancing 
SPI.\ncfg.setLoadBalancingSpi(spi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/affinity-collocation.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/affinity-collocation.md 
b/wiki/documentation/data-grid/affinity-collocation.md
deleted file mode 100644
index 6bbfcc5..0000000
--- a/wiki/documentation/data-grid/affinity-collocation.md
+++ /dev/null
@@ -1,95 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Given that the most common way to cache data is in `PARTITIONED` caches, collocating compute with data, or data with data, can significantly improve the performance and scalability of your application.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Collocate Data with Data"
-}
-[/block]
-In many cases it is beneficial to collocate different cache keys if they will be accessed together. Quite often, your business logic will require access to more than one cache key. By collocating such keys you can make sure that all keys with the same `affinityKey` will be cached on the same processing node, hence avoiding costly network trips to fetch data from remote nodes.
-
-For example, let's say you have `Person` and `Company` objects and you want to collocate `Person` objects with the `Company` object that each person works for. To achieve that, the cache key used to cache `Person` objects should have a field or method annotated with the `@CacheAffinityKeyMapped` annotation, which will provide the value of the company key for collocation. For convenience, you can also optionally use the `CacheAffinityKey` class.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class PersonKey {\n    // Person ID used to identify a 
person.\n    private String personId;\n \n    // Company ID which will be used 
for affinity.\n    @GridCacheAffinityKeyMapped\n    private String companyId;\n 
   ...\n}\n\n// Instantiate person keys with the same company ID which is used 
as affinity key.\nObject personKey1 = new PersonKey(\"myPersonId1\", 
\"myCompanyId\");\nObject personKey2 = new PersonKey(\"myPersonId2\", 
\"myCompanyId\");\n \nPerson p1 = new Person(personKey1, ...);\nPerson p2 = new 
Person(personKey2, ...);\n \n// Both, the company and the person objects will 
be cached on the same node.\ncache.put(\"myCompanyId\", new 
Company(..));\ncache.put(personKey1, p1);\ncache.put(personKey2, p2);",
-      "language": "java",
-      "name": "using PersonKey"
-    },
-    {
-      "code": "Object personKey1 = new CacheAffinityKey(\"myPersonId1\", 
\"myCompanyId\");\nObject personKey2 = new CacheAffinityKey(\"myPersonId2\", 
\"myCompanyId\");\n \nPerson p1 = new Person(personKey1, ...);\nPerson p2 = new 
Person(personKey2, ...);\n \n// Both, the company and the person objects will 
be cached on the same node.\ncache.put(\"myCompanyId\", new 
Company(..));\ncache.put(personKey1, p1);\ncache.put(personKey2, p2);",
-      "language": "java",
-      "name": "using CacheAffinityKey"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "SQL Joins",
-  "body": "When performing [SQL distributed 
joins](/docs/cache-queries#sql-queries) over data residing in partitioned 
caches, you must make sure that the join-keys are collocated."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Collocating Compute with Data"
-}
-[/block]
-It is also possible to route computations to the nodes where the data is cached. This concept is known as collocation of computations and data. It allows routing whole units of work to specific nodes.
-
-To collocate compute with data, you should use the `IgniteCompute.affinityRun(...)` and `IgniteCompute.affinityCall(...)` methods.
-
-Here is how you can collocate your computation with the same cluster node on which the company and persons from the example above are cached.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "String companyId = \"myCompanyId\";\n \n// Execute Runnable on 
the node where the key is cached.\nignite.compute().affinityRun(\"myCache\", 
companyId, () -> {\n  Company company = cache.get(companyId);\n\n  // Since we 
collocated persons with the company in the above example,\n  // access to the 
persons objects is local.\n  Person person1 = cache.get(personKey1);\n  Person 
person2 = cache.get(personKey2);\n  ...  \n});",
-      "language": "java",
-      "name": "affinityRun"
-    },
-    {
-      "code": "final String companyId = \"myCompanyId\";\n \n// Execute 
Runnable on the node where the key is 
cached.\nignite.compute().affinityRun(\"myCache\", companyId, new 
IgniteRunnable() {\n  @Override public void run() {\n    Company company = 
cache.get(companyId);\n    \n    Person person1 = cache.get(personKey1);\n    
Person person2 = cache.get(personKey2);\n    ...\n  }\n};",
-      "language": "java",
-      "name": "java7 affinityRun"
-    }
-  ]
-}
-[/block]
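-
-The examples above use `affinityRun(...)`, which returns nothing. `affinityCall(...)` works the same way but returns a result to the caller. Below is a minimal sketch, reusing the `cache` and `personKey1` variables assumed in the examples above.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "String companyId = \"myCompanyId\";\n \n// Execute Callable on the node where the key is cached\n// and return the result to the caller.\nPerson person = ignite.compute().affinityCall(\"myCache\", companyId, () -> {\n  // Since persons are collocated with the company, this lookup is local.\n  return cache.get(personKey1);\n});",
-      "language": "java",
-      "name": "affinityCall"
-    }
-  ]
-}
-[/block]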
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCompute vs EntryProcessor"
-}
-[/block]
-Both the `IgniteCompute.affinityRun(...)` and `IgniteCache.invoke(...)` methods offer the ability to collocate compute and data. The main difference is that the `invoke(...)` method is atomic and executes while holding a lock on the key. You should not access other keys from within the `EntryProcessor` logic, as it may cause a deadlock.
-
-`affinityRun(...)` and `affinityCall(...)`, on the other hand, do not hold any locks. For example, it is absolutely legal to start multiple transactions or execute cache queries from these methods without worrying about deadlocks. In this case, Ignite will automatically detect that the processing is collocated and will employ a light-weight 1-Phase-Commit optimization for transactions (instead of 2-Phase-Commit).
-[block:callout]
-{
-  "type": "info",
-  "body": "See [JCache EntryProcessor](/docs/jcache#entryprocessor) 
documentation for more information about `IgniteCache.invoke(...)` method."
-}
-[/block]
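-
-As an illustration of the `invoke(...)` semantics described above, here is a hedged sketch of a JCache `EntryProcessor` that atomically increments a counter. It assumes a `cache` variable of type `IgniteCache<String, Integer>` obtained elsewhere, and the key name is made up for the example.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Assumes 'cache' is an IgniteCache<String, Integer>; \"myCounter\" is a made-up key.\n// The processor executes atomically on the node where the entry is cached,\n// while the lock on the key is held.\nInteger newVal = cache.invoke(\"myCounter\", new EntryProcessor<String, Integer, Integer>() {\n  @Override public Integer process(MutableEntry<String, Integer> entry, Object... args) {\n    Integer val = entry.getValue();\n\n    int cnt = (val == null ? 0 : val) + 1;\n\n    // Touch only this entry - accessing other keys here may cause a deadlock.\n    entry.setValue(cnt);\n\n    return cnt;\n  }\n});",
-      "language": "java",
-      "name": "EntryProcessor"
-    }
-  ]
-}
-[/block]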
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/f54fc36c/wiki/documentation/data-grid/automatic-db-integration.md
----------------------------------------------------------------------
diff --git a/wiki/documentation/data-grid/automatic-db-integration.md 
b/wiki/documentation/data-grid/automatic-db-integration.md
deleted file mode 100644
index 27ee9bd..0000000
--- a/wiki/documentation/data-grid/automatic-db-integration.md
+++ /dev/null
@@ -1,119 +0,0 @@
-<!--
-  Licensed to the Apache Software Foundation (ASF) under one or more
-  contributor license agreements.  See the NOTICE file distributed with
-  this work for additional information regarding copyright ownership.
-  The ASF licenses this file to You under the Apache License, Version 2.0
-  (the "License"); you may not use this file except in compliance with
-  the License.  You may obtain a copy of the License at
-
-       http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
--->
-
-Ignite supports integration with databases out of the box via the `org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore` class, which implements the `org.apache.ignite.cache.store.CacheStore` interface.
-
-Ignite provides a utility that reads database metadata and generates POJO classes and XML configuration.
-
-The utility can be started with the following script:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$ bin/ignite-schema-load.sh",
-      "language": "shell"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Connect to database"
-}
-[/block]
-JDBC drivers **are not supplied** with the utility. You should download (and install, if needed) the appropriate JDBC driver for your database.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/jKCMIgmTi2uqSqgkgiTQ";,
-        "ignite-schema-load-01.png",
-        "650",
-        "650",
-        "#d6363a",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Generate XML configuration and POJO classes"
-}
-[/block]
-Select the tables you want to map to POJO classes and click the 'Generate' button.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/13YM8mBRXaTB8yXWJkWI";,
-        "ignite-schema-load-02.png",
-        "650",
-        "650",
-        "",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Produced output"
-}
-[/block]
-The utility will generate POJO classes, XML configuration, and a Java snippet for configuring the cache in code.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "/**\n * PersonKey definition.\n *\n * Code generated by Apache 
Ignite Schema Load utility: 03/03/2015.\n */\npublic class PersonKey implements 
Serializable {\n    /** */\n    private static final long serialVersionUID = 
0L;\n\n    /** Value for id. */\n    private int id;\n\n    /**\n     * Gets 
id.\n     *\n     * @return Value for id.\n     */\n    public int getId() {\n  
      return id;\n    }\n\n    /**\n     * Sets id.\n     *\n     * @param id 
New value for id.\n     */\n    public void setId(int id) {\n        this.id = 
id;\n    }\n\n    /** {@inheritDoc} */\n    @Override public boolean 
equals(Object o) {\n        if (this == o)\n            return true;\n\n        
if (!(o instanceof PersonKey))\n            return false;\n\n        PersonKey 
that = (PersonKey)o;\n\n        if (id != that.id)\n            return 
false;\n\n        return true;\n    }\n\n    /** {@inheritDoc} */\n    
@Override public int hashCode() {\n        int res = id;\n\n        return res
 ;\n    }\n\n    /** {@inheritDoc} */\n    @Override public String toString() 
{\n        return \"PersonKey [id=\" + id +\n            \"]\";\n    }\n}",
-      "language": "java",
-      "name": "POJO Key class"
-    },
-    {
-      "code": "/**\n * Person definition.\n *\n * Code generated by Apache 
Ignite Schema Load utility: 03/03/2015.\n */\npublic class Person implements 
Serializable {\n    /** */\n    private static final long serialVersionUID = 
0L;\n\n    /** Value for id. */\n    private int id;\n\n    /** Value for 
orgId. */\n    private Integer orgId;\n\n    /** Value for name. */\n    
private String name;\n\n    /**\n     * Gets id.\n     *\n     * @return Value 
for id.\n     */\n    public int getId() {\n        return id;\n    }\n\n    
/**\n     * Sets id.\n     *\n     * @param id New value for id.\n     */\n    
public void setId(int id) {\n        this.id = id;\n    }\n\n    /**\n     * 
Gets orgId.\n     *\n     * @return Value for orgId.\n     */\n    public 
Integer getOrgId() {\n        return orgId;\n    }\n\n    /**\n     * Sets 
orgId.\n     *\n     * @param orgId New value for orgId.\n     */\n    public 
void setOrgId(Integer orgId) {\n        this.orgId = orgId;\n    }\n\n    /**\n 
  
   * Gets name.\n     *\n     * @return Value for name.\n     */\n    public 
String getName() {\n        return name;\n    }\n\n    /**\n     * Sets name.\n 
    *\n     * @param name New value for name.\n     */\n    public void 
setName(String name) {\n        this.name = name;\n    }\n\n    /** 
{@inheritDoc} */\n    @Override public boolean equals(Object o) {\n        if 
(this == o)\n            return true;\n\n        if (!(o instanceof Person))\n  
          return false;\n\n        Person that = (Person)o;\n\n        if (id 
!= that.id)\n            return false;\n\n        if (orgId != null ? 
!orgId.equals(that.orgId) : that.orgId != null)\n            return false;\n\n  
      if (name != null ? !name.equals(that.name) : that.name != null)\n         
   return false;\n\n        return true;\n    }\n\n    /** {@inheritDoc} */\n   
 @Override public int hashCode() {\n        int res = id;\n\n        res = 31 * 
res + (orgId != null ? orgId.hashCode() : 0);\n\n        res = 31 * res + (
 name != null ? name.hashCode() : 0);\n\n        return res;\n    }\n\n    /** 
{@inheritDoc} */\n    @Override public String toString() {\n        return 
\"Person [id=\" + id +\n            \", orgId=\" + orgId +\n            \", 
name=\" + name +\n            \"]\";\n    }\n}",
-      "language": "java",
-      "name": "POJO Value class"
-    },
-    {
-      "code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<beans 
xmlns=\"http://www.springframework.org/schema/beans\"\n       
xmlns:util=\"http://www.springframework.org/schema/util\"\n       
xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n       
xsi:schemaLocation=\"http://www.springframework.org/schema/beans\n              
             http://www.springframework.org/schema/beans/spring-beans.xsd\n     
                      http://www.springframework.org/schema/util\n              
             http://www.springframework.org/schema/util/spring-util.xsd\">\n    
<bean class=\"org.apache.ignite.cache.CacheTypeMetadata\">\n        <property 
name=\"databaseSchema\" value=\"PUBLIC\"/>\n        <property 
name=\"databaseTable\" value=\"PERSON\"/>\n        <property name=\"keyType\" 
value=\"org.apache.ignite.examples.datagrid.store.model.PersonKey\"/>\n        
<property name=\"valueType\" 
value=\"org.apache.ignite.examples.datagrid.store.model.Person\"/>\n        
<property name=\"
 keyFields\">\n            <list>\n                <bean 
class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    
<property name=\"databaseName\" value=\"ID\"/>\n                    <property 
name=\"databaseType\">\n                        <util:constant 
static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n    
                <property name=\"javaName\" value=\"id\"/>\n                    
<property name=\"javaType\" value=\"int\"/>\n                </bean>\n          
  </list>\n        </property>\n        <property name=\"valueFields\">\n       
     <list>\n                <bean 
class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    
<property name=\"databaseName\" value=\"ID\"/>\n                    <property 
name=\"databaseType\">\n                        <util:constant 
static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n    
                <property name=\"javaName\" value=\"id\"/>\n      
               <property name=\"javaType\" value=\"int\"/>\n                
</bean>\n                <bean 
class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    
<property name=\"databaseName\" value=\"ORG_ID\"/>\n                    
<property name=\"databaseType\">\n                        <util:constant 
static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n    
                <property name=\"javaName\" value=\"orgId\"/>\n                 
   <property name=\"javaType\" value=\"java.lang.Integer\"/>\n                
</bean>\n                <bean 
class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    
<property name=\"databaseName\" value=\"NAME\"/>\n                    <property 
name=\"databaseType\">\n                        <util:constant 
static-field=\"java.sql.Types.VARCHAR\"/>\n                    </property>\n    
                <property name=\"javaName\" value=\"name\"/>\n                  
  <property name
 =\"javaType\" value=\"java.lang.String\"/>\n                </bean>\n          
  </list>\n        </property>\n    </bean>\n</beans>",
-      "language": "xml",
-      "name": "XML Configuration"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new 
IgniteConfiguration();\n...\nCacheConfiguration ccfg = new 
CacheConfiguration<>();\n\nDataSource dataSource = null; // TODO: Create data 
source for your database.\n\n// Create store. \nCacheJdbcPojoStore store = new 
CacheJdbcPojoStore();\nstore.setDataSource(dataSource);\n\n// Create store 
factory. \nccfg.setCacheStoreFactory(new 
FactoryBuilder.SingletonFactory<>(store));\n\n// Configure cache to use store. 
\nccfg.setReadThrough(true);\nccfg.setWriteThrough(true);\n\ncfg.setCacheConfiguration(ccfg);\n\n//
 Configure cache types. \nCollection<CacheTypeMetadata> meta = new 
ArrayList<>();\n\n// PERSON.\nCacheTypeMetadata type = new 
CacheTypeMetadata();\ntype.setDatabaseSchema(\"PUBLIC\");\ntype.setDatabaseTable(\"PERSON\");\ntype.setKeyType(\"org.apache.ignite.examples.datagrid.store.model.PersonKey\");\ntype.setValueType(\"org.apache.ignite.examples.datagrid.store.model.Person\");\n\n//
 Key fields for PERSON.\nCollection<CacheTypeFieldMetada
 ta> keys = new ArrayList<>();\nkeys.add(new CacheTypeFieldMetadata(\"ID\", 
java.sql.Types.INTEGER,\"id\", int.class));\ntype.setKeyFields(keys);\n\n// 
Value fields for PERSON.\nCollection<CacheTypeFieldMetadata> vals = new 
ArrayList<>();\nvals.add(new CacheTypeFieldMetadata(\"ID\", 
java.sql.Types.INTEGER,\"id\", int.class));\nvals.add(new 
CacheTypeFieldMetadata(\"ORG_ID\", java.sql.Types.INTEGER,\"orgId\", 
Integer.class));\nvals.add(new CacheTypeFieldMetadata(\"NAME\", 
java.sql.Types.VARCHAR,\"name\", 
String.class));\ntype.setValueFields(vals);\n...\n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": "Java snippet"
-    }
-  ]
-}
-[/block]
-Copy the generated POJO Java classes to your project's source folder.
-
-Copy the declaration of `CacheTypeMetadata` from the generated XML file and paste it into your project's XML configuration file under the appropriate `CacheConfiguration` root.
-
-Alternatively, paste the snippet with the cache configuration into an appropriate Java class in your project, as sketched below.
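-
-Note that the generated Java snippet elides (with `...`) the final wiring step. Below is a minimal sketch of that step, assuming the `ccfg`, `meta` and `type` variables from the snippet above and the `typeMetadata` property of `CacheConfiguration` in this version.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Register the generated type metadata with the cache configuration.\nmeta.add(type);\n\nccfg.setTypeMetadata(meta);",
-      "language": "java"
-    }
-  ]
-}
-[/block]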
\ No newline at end of file
