http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/compute-grid/collocate-compute-and-data.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/collocate-compute-and-data.md 
b/docs/wiki/compute-grid/collocate-compute-and-data.md
deleted file mode 100755
index cb69d25..0000000
--- a/docs/wiki/compute-grid/collocate-compute-and-data.md
+++ /dev/null
@@ -1,29 +0,0 @@
-Collocation of computations with data allows for minimizing data serialization within the network and can significantly improve the performance and scalability of your application. Whenever possible, you should always make your best effort to collocate your computations with the cluster nodes caching the data that needs to be processed.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Affinity Call and Run Methods"
-}
-[/block]
-`affinityCall(...)` and `affinityRun(...)` methods collocate jobs with the nodes on which the data is cached. In other words, given a cache name and an affinity key, these methods try to locate the node on which the key resides in the specified Ignite cache, and then execute the job there.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Integer, String> cache = 
ignite.cache(CACHE_NAME);\n\nIgniteCompute compute = ignite.compute();\n\nfor 
(int key = 0; key < KEY_CNT; key++) {\n    // This closure will execute on the 
remote node where\n    // data with the 'key' is located.\n    
compute.affinityRun(CACHE_NAME, key, () -> { \n        // Peek is a local 
memory lookup.\n        System.out.println(\"Co-located [key= \" + key + \", 
value= \" + cache.peek(key) +']');\n    });\n}",
-      "language": "java",
-      "name": "affinityRun"
-    },
-    {
-      "code": "IgniteCache<Integer, String> cache = 
ignite.cache(CACHE_NAME);\n\nIgniteCompute asyncCompute = 
ignite.compute().withAsync();\n\nList<IgniteFuture<?>> futs = new 
ArrayList<>();\n\nfor (int key = 0; key < KEY_CNT; key++) {\n    // This 
closure will execute on the remote node where\n    // data with the 'key' is 
located.\n    asyncCompute.affinityRun(CACHE_NAME, key, () -> { \n        // 
Peek is a local memory lookup.\n        System.out.println(\"Co-located [key= 
\" + key + \", value= \" + cache.peek(key) +']');\n    });\n  \n    
futs.add(asyncCompute.future());\n}\n\n// Wait for all futures to 
complete.\nfuts.stream().forEach(IgniteFuture::get);",
-      "language": "java",
-      "name": "async affinityRun"
-    },
-    {
-      "code": "final IgniteCache<Integer, String> cache = 
ignite.cache(CACHE_NAME);\n\nIgniteCompute compute = ignite.compute();\n\nfor 
(int i = 0; i < KEY_CNT; i++) {\n    final int key = i;\n \n    // This closure 
will execute on the remote node where\n    // data with the 'key' is located.\n 
   compute.affinityRun(CACHE_NAME, key, new IgniteRunnable() {\n        
@Override public void run() {\n            // Peek is a local memory lookup.\n  
          System.out.println(\"Co-located [key= \" + key + \", value= \" + 
cache.peek(key) +']');\n        }\n    });\n}",
-      "language": "java",
-      "name": "java7 affinityRun"
-    }
-  ]
-}
-[/block]
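-
-For computations that need to return a value from the data node, `affinityCall(...)` works the same way. Below is a minimal sketch, assuming the same `CACHE_NAME` cache as above:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Integer, String> cache = ignite.cache(CACHE_NAME);\n\n// Execute the closure on the node where key 1 is cached\n// and return the cached value to the caller.\nString val = ignite.compute().affinityCall(CACHE_NAME, 1, () -> {\n    // Peek is a local memory lookup.\n    return cache.peek(1);\n});",
-      "language": "java",
-      "name": "affinityCall"
-    }
-  ]
-}
-[/block]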
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/compute-grid/compute-grid.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/compute-grid.md 
b/docs/wiki/compute-grid/compute-grid.md
deleted file mode 100755
index 1a65400..0000000
--- a/docs/wiki/compute-grid/compute-grid.md
+++ /dev/null
@@ -1,56 +0,0 @@
-Distributed computations are performed in a parallel fashion to gain **high performance**, **low latency**, and **linear scalability**. Ignite compute grid provides a set of simple APIs that allow users to distribute computations and data processing across multiple computers in the cluster. Distributed parallel processing is based on the ability to take any computation, execute it on any set of cluster nodes, and return the results back.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/zrJB0GshRdS3hLn0QGlI";,
-        "in_memory_compute.png",
-        "400",
-        "301",
-        "#da4204",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-##Features
-  * [Distributed Closure Execution](doc:distributed-closures)
-  * [MapReduce & ForkJoin Processing](doc:compute-tasks)
-  * [Clustered Executor Service](doc:executor-service)
-  * [Collocation of Compute and Data](doc:collocate-compute-and-data) 
-  * [Load Balancing](doc:load-balancing) 
-  * [Fault Tolerance](doc:fault-tolerance)
-  * [Job State Checkpointing](doc:checkpointing) 
-  * [Job Scheduling](doc:job-scheduling) 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCompute"
-}
-[/block]
-The `IgniteCompute` interface provides methods for running many types of computations over nodes in a cluster or a cluster group. These methods can be used to execute tasks or closures in a distributed fashion.
-
-All jobs and closures are [guaranteed to be executed](doc:fault-tolerance) as 
long as there is at least one node standing. If a job execution is rejected due 
to lack of resources, a failover mechanism is provided. In case of failover, 
the load balancer picks the next available node to execute the job. Here is how 
you can get an `IgniteCompute` instance:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Get compute instance 
over all nodes in the cluster.\nIgniteCompute compute = ignite.compute();",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-You can also limit the scope of computations to a [Cluster Group](doc:cluster-groups). In this case, the computations will execute only on the nodes within the cluster group.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignitition.ignite();\n\nClusterGroup 
remoteGroup = ignite.cluster().forRemotes();\n\n// Limit computations only to 
remote nodes (exclude local node).\nIgniteCompute compute = 
ignite.compute(remoteGroup);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/compute-grid/compute-tasks.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/compute-tasks.md 
b/docs/wiki/compute-grid/compute-tasks.md
deleted file mode 100755
index 77f045d..0000000
--- a/docs/wiki/compute-grid/compute-tasks.md
+++ /dev/null
@@ -1,105 +0,0 @@
-`ComputeTask` is the Ignite abstraction for simplified in-memory MapReduce, which is also very close to the ForkJoin paradigm. Pure MapReduce was never built for performance and only works well when dealing with off-line, batch-oriented processing (e.g. Hadoop MapReduce). However, when computing on data that resides in-memory, real-time low latencies and high throughput usually take the highest priority, and simplicity of the API becomes very important as well. With that in mind, Ignite introduced the `ComputeTask` API, which is a light-weight MapReduce (or ForkJoin) implementation.
-[block:callout]
-{
-  "type": "info",
-  "body": "Use `ComputeTask` only when you need fine-grained control over the 
job-to-node mapping, or custom fail-over logic. For all other cases you should 
use simple closure executions on the cluster documented in [Distributed 
Computations](doc:compute) section."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ComputeTask"
-}
-[/block]
-`ComputeTask` defines jobs to execute on the cluster and the mappings of those jobs to nodes. It also defines how to process (reduce) the job results. All `IgniteCompute.execute(...)` methods execute the given task on the grid. User applications should implement the `map(...)` and `reduce(...)` methods of the `ComputeTask` interface.
-
-Tasks are defined by implementing 2 or 3 methods of the `ComputeTask` interface.
-
-##Map Method
-Method `map(...)` instantiates the jobs and maps them to worker nodes. The 
method receives the collection of cluster nodes on which the task is run and 
the task argument. The method should return a map with jobs as keys and mapped 
worker nodes as values. The jobs are then sent to the mapped nodes and executed 
there.
-[block:callout]
-{
-  "type": "info",
-  "body": "Refer to [ComputeTaskSplitAdapter](#computetasksplitadapter) for 
simplified implementation of the `map(...)` method."
-}
-[/block]
-##Result Method
-Method `result(...)` is called each time a job completes on some cluster node. It receives the result returned by the completed job, as well as the list of all the job results received so far. The method should return a `ComputeJobResultPolicy` instance, indicating what to do next:
-  * `WAIT` - wait for all remaining jobs to complete (if any)
-  * `REDUCE` - immediately move to the reduce step, discarding all the remaining jobs and any results not yet received
-  * `FAILOVER` - fail the job over to another node (see [Fault Tolerance](doc:fault-tolerance))
-All the received job results will also be available in the `reduce(...)` method.
-
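-As an illustration, here is a minimal sketch of a custom `result(...)` implementation that fails a job over on any exception and reduces early once the first non-null result arrives (the early-reduce condition is hypothetical, chosen just for this example):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "@Override \npublic ComputeJobResultPolicy result(ComputeJobResult res, List<ComputeJobResult> rcvd) {\n    // Retry the job on another node if it failed with an exception.\n    if (res.getException() != null)\n        return ComputeJobResultPolicy.FAILOVER;\n\n    // Hypothetical early exit: the first non-null result is enough.\n    if (res.getData() != null)\n        return ComputeJobResultPolicy.REDUCE;\n\n    // Otherwise, keep waiting for the remaining jobs.\n    return ComputeJobResultPolicy.WAIT;\n}",
-      "language": "java",
-      "name": "custom result(...)"
-    }
-  ]
-}
-[/block]
-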
-##Reduce Method
-Method `reduce(...)` is called during the reduce step, when all the jobs have completed (or the `REDUCE` result policy was returned from the `result(...)` method). The method receives a list with all the completed results and should return the final result of the computation.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Compute Task Adapters"
-}
-[/block]
-It is not necessary to implement all 3 methods of the `ComputeTask` API each time you need to define a computation. There are a number of helper classes that let you describe only a particular piece of your logic, leaving the rest for Ignite to handle automatically.
-
-##ComputeTaskAdapter
-`ComputeTaskAdapter` defines a default implementation of the `result(...)` 
method which returns `FAILOVER` policy if a job threw an exception and `WAIT` 
policy otherwise, thus waiting for all jobs to finish with a result.
-
-##ComputeTaskSplitAdapter
-`ComputeTaskSplitAdapter` extends `ComputeTaskAdapter` and adds the capability to automatically assign jobs to nodes. It hides the `map(...)` method and adds a new `split(...)` method in which the user only needs to provide a collection of the jobs to be executed (the mapping of those jobs to nodes will be handled automatically by the adapter in a load-balanced fashion).
-
-This adapter is especially useful in homogeneous environments where all nodes 
are equally suitable for executing jobs and the mapping step can be done 
implicitly.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "ComputeJob"
-}
-[/block]
-All jobs that are spawned by a task are implementations of the `ComputeJob` interface. The `execute()` method of this interface defines the job logic and should return a job result. The `cancel()` method defines the logic to execute if the job is discarded (for example, when the task decides to reduce immediately or to cancel).
-
-##ComputeJobAdapter
-Convenience adapter which provides a no-op implementation of the `cancel()` 
method.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Example"
-}
-[/block]
-Here is an example of `ComputeTask` and `ComputeJob` implementations.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute task on 
the clustr and wait for its completion.\nint cnt = 
grid.compute().execute(CharacterCountTask.class, \"Hello Grid Enabled 
World!\");\n \nSystem.out.println(\">>> Total number of characters in the 
phrase is '\" + cnt + \"'.\");\n \n/**\n * Task to count non-white-space 
characters in a phrase.\n */\nprivate static class CharacterCountTask extends 
ComputeTaskSplitAdapter<String, Integer> {\n  // 1. Splits the received string 
into to words\n  // 2. Creates a child job for each word\n  // 3. Sends created 
jobs to other nodes for processing. \n  @Override \n  public List<ClusterNode> 
split(List<ClusterNode> subgrid, String arg) {\n    String[] words = 
arg.split(\" \");\n\n    List<ComputeJob> jobs = new 
ArrayList<>(words.length);\n\n    for (final String word : arg.split(\" \")) 
{\n      jobs.add(new ComputeJobAdapter() {\n        @Override public Object 
execute() {\n          System.out.println(\">>> Printing '
 \" + word + \"' on from compute job.\");\n\n          // Return number of 
letters in the word.\n          return word.length();\n        }\n      });\n   
 }\n\n    return jobs;\n  }\n\n  @Override \n  public Integer 
reduce(List<ComputeJobResult> results) {\n    int sum = 0;\n\n    for 
(ComputeJobResult res : results)\n      sum += res.<Integer>getData();\n\n    
return sum;\n  }\n}",
-      "language": "java",
-      "name": "ComputeTaskSplitAdapter"
-    },
-    {
-      "code": "IgniteCompute compute = ignite.compute();\n\n// Execute task on 
the clustr and wait for its completion.\nint cnt = 
grid.compute().execute(CharacterCountTask.class, \"Hello Grid Enabled 
World!\");\n \nSystem.out.println(\">>> Total number of characters in the 
phrase is '\" + cnt + \"'.\");\n \n/**\n * Task to count non-white-space 
characters in a phrase.\n */\nprivate static class CharacterCountTask extends 
ComputeTaskAdapter<String, Integer> {\n    // 1. Splits the received string 
into to words\n    // 2. Creates a child job for each word\n    // 3. Sends 
created jobs to other nodes for processing. \n    @Override \n    public Map<? 
extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, String arg) {\n 
       String[] words = arg.split(\" \");\n      \n        Map<ComputeJob, 
ClusterNode> map = new HashMap<>(words.length);\n        \n        
Iterator<ClusterNode> it = subgrid.iterator();\n         \n        for (final 
String word : arg.split(\" \")) {\n      
       // If we used all nodes, restart the iterator.\n            if 
(!it.hasNext())\n                it = subgrid.iterator();\n             \n      
      ClusterNode node = it.next();\n                \n            map.put(new 
ComputeJobAdapter() {\n                @Override public Object execute() {\n    
                System.out.println(\">>> Printing '\" + word + \"' on this node 
from grid job.\");\n                  \n                    // Return number of 
letters in the word.\n                    return word.length();\n               
 }\n             }, node);\n        }\n      \n        return map;\n    }\n \n  
  @Override \n    public Integer reduce(List<ComputeJobResult> results) {\n     
   int sum = 0;\n      \n        for (ComputeJobResult res : results)\n         
   sum += res.<Integer>getData();\n      \n        return sum;\n    }\n}",
-      "language": "java",
-      "name": "ComputeTaskAdapter"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Distributed Task Session"
-}
-[/block]
-A distributed task session is created for every task execution. It is defined by the `ComputeTaskSession` interface. The task session is visible to the task and all the jobs spawned by it, so attributes set on a task or on a job can be accessed by other jobs. The task session also allows you to receive notifications when attributes are set, or to wait for an attribute to be set.
-
-The sequence in which session attributes are set is consistent across the task 
and all job siblings within it. There will never be a case when one job sees 
attribute A before attribute B, and another job sees attribute B before A.
-
-In the example below, we have all jobs synchronize on STEP1 before moving on 
to STEP2. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = 
ignite.commpute();\n\ncompute.execute(new ComputeTasSplitAdapter<Object, 
Object>() {\n  @Override \n  protected Collection<? extends GridJob> split(int 
gridSize, Object arg)  {\n    Collection<ComputeJob> jobs = new 
LinkedList<>();\n\n    // Generate jobs by number of nodes in the grid.\n    
for (int i = 0; i < gridSize; i++) {\n      jobs.add(new ComputeJobAdapter(arg) 
{\n        // Auto-injected task session.\n        @TaskSessionResource\n       
 private ComputeTaskSession ses;\n        \n        // Auto-injected job 
context.\n        @JobContextResource\n        private ComputeJobContext 
jobCtx;\n\n        @Override \n        public Object execute() {\n          // 
Perform STEP1.\n          ...\n          \n          // Tell other jobs that 
STEP1 is complete.\n          ses.setAttribute(jobCtx.getJobId(), \"STEP1\");\n 
         \n          // Wait for other jobs to complete STEP1.\n          for 
(ComputeJobSibling sibling : ses.getJobSiblin
 gs())\n            ses.waitForAttribute(sibling.getJobId(), \"STEP1\", 0);\n   
       \n          // Move on to STEP2.\n          ...\n        }\n      }\n    
}\n  }\n               \n  @Override \n  public Object 
reduce(List<ComputeJobResult> results) {\n    // No-op.\n    return null;\n  
}\n}, null);\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/compute-grid/distributed-closures.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/distributed-closures.md 
b/docs/wiki/compute-grid/distributed-closures.md
deleted file mode 100755
index 9df1011..0000000
--- a/docs/wiki/compute-grid/distributed-closures.md
+++ /dev/null
@@ -1,107 +0,0 @@
-Ignite compute grid allows you to broadcast and load-balance any closure within the cluster or a cluster group, including plain Java `Runnable`s and `Callable`s.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Broadcast Methods"
-}
-[/block]
-All `broadcast(...)` methods broadcast a given job to all nodes in the cluster 
or cluster group. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast 
to remote nodes only.\nIgniteCompute compute = 
ignite.compute(ignite.cluster().forRemotes());\n\n// Print out hello message on 
remote nodes in the cluster group.\ncompute.broadcast(() -> 
System.out.println(\"Hello Node: \" + ignite.cluster().localNode().id()));",
-      "language": "java",
-      "name": "broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast 
to remote nodes only and \n// enable asynchronous mode.\nIgniteCompute compute 
= ignite.compute(ignite.cluster().forRemotes()).withAsync();\n\n// Print out 
hello message on remote nodes in the cluster group.\ncompute.broadcast(() -> 
System.out.println(\"Hello Node: \" + 
ignite.cluster().localNode().id()));\n\nComputeTaskFuture<?> fut = 
compute.future():\n\nfut.listenAsync(f -> System.out.println(\"Finished sending 
broadcast job.\"));",
-      "language": "java",
-      "name": "async broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast 
to rmeote nodes only.\nIgniteCompute compute = 
ignite.compute(ignite.cluster.forRemotes());\n\n// Print out hello message on 
remote nodes in projection.\ncompute.broadcast(\n    new IgniteRunnable() {\n   
     @Override public void run() {\n            // Print ID of remote node on 
remote node.\n            System.out.println(\">>> Hello Node: \" + 
ignite.cluster().localNode().id());\n        }\n    }\n);",
-      "language": "java",
-      "name": "java7 broadcast"
-    },
-    {
-      "code": "final Ignite ignite = Ignition.ignite();\n\n// Limit broadcast 
to remote nodes only and \n// enable asynchronous mode.\nIgniteCompute compute 
= ignite.compute(ignite.cluster.forRemotes()).withAsync();\n\n// Print out 
hello message on remote nodes in the cluster group.\ncompute.broadcast(\n    
new IgniteRunnable() {\n        @Override public void run() {\n            // 
Print ID of remote node on remote node.\n            System.out.println(\">>> 
Hello Node: \" + ignite.cluster().localNode().id());\n        }\n    
}\n);\n\nComputeTaskFuture<?> fut = compute.future():\n\nfut.listenAsync(new 
IgniteInClosure<? super ComputeTaskFuture<?>>() {\n    public void 
apply(ComputeTaskFuture<?> fut) {\n        System.out.println(\"Finished 
sending broadcast job to cluster.\");\n    }\n});",
-      "language": "java",
-      "name": "java7 async broadcast"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Call and Run Methods"
-}
-[/block]
-All `call(...)` and `run(...)` methods execute either individual jobs or 
collections of jobs on the cluster or a cluster group.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new 
ArrayList<>();\n \n// Iterate through all words in the sentence and create 
callable jobs.\nfor (String word : \"Count characters using callable\".split(\" 
\"))\n    calls.add(word::length);\n\n// Execute collection of callables on the 
cluster.\nCollection<Integer> res = ignite.compute().call(calls);\n\n// Add all 
the word lengths received from cluster nodes.\nint total = 
res.stream().mapToInt(Integer::intValue).sum(); ",
-      "language": "java",
-      "name": "call"
-    },
-    {
-      "code": "IgniteCompute compute = Ignite.compute();\n\n// Iterate through 
all words and print \n// each word on a different cluster node.\nfor (String 
word : \"Print words on different cluster nodes\".split(\" \"))\n    // Run on 
some cluster node.\n    compute.run(() -> System.out.println(word));",
-      "language": "java",
-      "name": "run"
-    },
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new 
ArrayList<>();\n \n// Iterate through all words in the sentence and create 
callable jobs.\nfor (String word : \"Count characters using callable\".split(\" 
\"))\n    calls.add(word::length);\n\n// Enable asynchronous 
mode.\nIgniteCompute asyncCompute = ignite.compute().withAsync();\n\n// 
Asynchronously execute collection of callables on the 
cluster.\nasyncCompute.call(calls);\n\nasyncCompute.future().listenAsync(fut -> 
{\n    // Total number of characters.\n    int total = 
fut.get().stream().mapToInt(Integer::intValue).sum(); \n  \n    
System.out.println(\"Total number of characters: \" + total);\n});",
-      "language": "java",
-      "name": "async call"
-    },
-    {
-      "code": "IgniteCompute asyncCompute = 
ignite.compute().withAsync();\n\nCollection<ComputeTaskFuture<?>> futs = new 
ArrayList<>();\n\n// Iterate through all words and print \n// each word on a 
different cluster node.\nfor (String word : \"Print words on different cluster 
nodes\".split(\" \")) {\n    // Asynchronously run on some cluster node.\n    
asyncCompute.run(() -> System.out.println(word));\n\n    
futs.add(asyncCompute.future());\n}\n\n// Wait for completion of all 
futures.\nfuts.stream().forEach(ComputeTaskFuture::get);",
-      "language": "java",
-      "name": "async run"
-    },
-    {
-      "code": "Collection<IgniteCallable<Integer>> calls = new 
ArrayList<>();\n \n// Iterate through all words in the sentence and create 
callable jobs.\nfor (final String word : \"Count characters using 
callable\".split(\" \")) {\n    calls.add(new GridCallable<Integer>() {\n       
 @Override public Integer call() throws Exception {\n            return 
word.length(); // Return word length.\n        }\n    });\n}\n \n// Execute 
collection of callables on the cluster.\nCollection<Integer> res = 
ignite.compute().call(calls);\n\nint total = 0;\n\n// Total number of 
characters.\n// Looks much better in Java 8.\nfor (Integer i : res)\n  total += 
i;",
-      "language": "java",
-      "name": "java7 call"
-    },
-    {
-      "code": "IgniteCompute asyncCompute = 
ignite.compute().withAsync();\n\nCollection<ComputeTaskFuture<?>> futs = new 
ArrayList<>();\n\n// Iterate through all words and print\n// each word on a 
different cluster node.\nfor (String word : \"Print words on different cluster 
nodes\".split(\" \")) {\n    // Asynchronously run on some cluster node.\n    
asyncCompute.run(new IgniteRunnable() {\n        @Override public void run() 
{\n            System.out.println(word);\n        }\n    });\n\n    
futs.add(asyncCompute.future());\n}\n\n// Wait for completion of all 
futures.\nfor (ComputeTaskFuture<?> f : futs)\n  f.get();",
-      "language": "java",
-      "name": "java7 async run"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Apply Methods"
-}
-[/block]
-A closure is a block of code that encloses its body and any outside variables used inside of it as a function object. You can then pass such a function object anywhere you can pass a variable, and execute it. All `apply(...)` methods execute closures on the cluster.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute  = ignite.compute();\n\n// Execute 
closure on all cluster nodes.\nCollection<Integer> res = 
ignite.compute().apply(\n    String::length,\n    Arrays.asList(\"Count 
characters using closure\".split(\" \"))\n);\n     \n// Add all the word 
lengths received from cluster nodes.\nint total = 
res.stream().mapToInt(Integer::intValue).sum(); ",
-      "language": "java",
-      "name": "apply"
-    },
-    {
-      "code": "// Enable asynchronous mode.\nIgniteCompute asyncCompute = 
ignite.compute().withAsync();\n\n// Execute closure on all cluster nodes.\n// 
If the number of closures is less than the number of \n// parameters, then 
Ignite will create as many closures \n// as there are 
parameters.\nCollection<Integer> res = ignite.compute().apply(\n    
String::length,\n    Arrays.asList(\"Count characters using closure\".split(\" 
\"))\n);\n     \nasyncCompute.future().listenAsync(fut -> {\n    // Total 
number of characters.\n    int total = 
fut.get().stream().mapToInt(Integer::intValue).sum(); \n  \n    
System.out.println(\"Total number of characters: \" + total);\n});",
-      "language": "java",
-      "name": "async apply"
-    },
-    {
-      "code": "// Execute closure on all cluster nodes.\n// If the number of 
closures is less than the number of \n// parameters, then Ignite will create as 
many closures \n// as there are parameters.\nCollection<Integer> res = 
ignite.compute().apply(\n    new IgniteClosure<String, Integer>() {\n        
@Override public Integer apply(String word) {\n            // Return number of 
letters in the word.\n            return word.length();\n        }\n    },\n    
Arrays.asList(\"Count characters using closure\".split(\" \"))\n).get();\n     
\nint sum = 0;\n \n// Add up individual word lengths received from remote 
nodes\nfor (int len : res)\n    sum += len;",
-      "language": "java",
-      "name": "java7 apply"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/compute-grid/executor-service.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/executor-service.md 
b/docs/wiki/compute-grid/executor-service.md
deleted file mode 100755
index 6c894f9..0000000
--- a/docs/wiki/compute-grid/executor-service.md
+++ /dev/null
@@ -1,23 +0,0 @@
-[IgniteCompute](doc:compute) provides a convenient API for executing computations on the cluster. However, you can also work directly with the standard `ExecutorService` interface from the JDK. Ignite provides a cluster-enabled implementation of `ExecutorService` that automatically executes all the computations in a load-balanced fashion within the cluster. Your computations also become fault-tolerant and are guaranteed to execute as long as there is at least one node left. You can think of it as a distributed cluster-enabled thread pool.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Get cluster-enabled executor service.\nExecutorService exec 
= ignite.executorService();\n \n// Iterate through all words in the sentence 
and create jobs.\nfor (final String word : \"Print words using 
runnable\".split(\" \")) {\n  // Execute runnable on some node.\n  
exec.submit(new IgniteRunnable() {\n    @Override public void run() {\n      
System.out.println(\">>> Printing '\" + word + \"' on this node from grid 
job.\");\n    }\n  });\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
- 
-You can also limit job execution to a subset of the nodes in your grid:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Cluster group for nodes where the attribute 'worker' is 
defined.\nClusterGroup workerGrp = ignite.cluster().forAttribute(\"ROLE\", 
\"worker\");\n\n// Get cluster-enabled executor service for the above cluster 
group.\nExecutorService exec = icnite.executorService(workerGrp);\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
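-
-Jobs submitted through this executor will run only on the nodes from the `workerGrp` cluster group. A minimal usage sketch, reusing `exec` from the snippet above:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Runs on one of the nodes where the 'ROLE' attribute equals 'worker'.\nexec.submit(new IgniteRunnable() {\n    @Override public void run() {\n        System.out.println(\">>> Executed on a 'worker' node.\");\n    }\n});",
-      "language": "java"
-    }
-  ]
-}
-[/block]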
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/compute-grid/fault-tolerance.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/fault-tolerance.md 
b/docs/wiki/compute-grid/fault-tolerance.md
deleted file mode 100755
index d230f79..0000000
--- a/docs/wiki/compute-grid/fault-tolerance.md
+++ /dev/null
@@ -1,79 +0,0 @@
-Ignite supports automatic job failover. In case of a node crash, jobs are automatically transferred to other available nodes for re-execution. However, in Ignite you can also treat any job result as a failure. The worker node may still be alive, but it may be running low on CPU, I/O, disk space, etc. There are many conditions that may result in a failure within your application, and you can trigger a failover for any of them. Moreover, you have the ability to choose the node to which a job should be failed over, as it could be different for different applications or different computations within the same application.
-
-The `FailoverSpi` is responsible for handling the selection of a new node for 
the execution of a failed job. `FailoverSpi` inspects the failed job and the 
list of all available grid nodes on which the job execution can be retried. It 
ensures that the job is not re-mapped to the same node it had failed on. 
Failover is triggered when the method `ComputeTask.result(...)` returns the 
`ComputeJobResultPolicy.FAILOVER` policy. Ignite comes with a number of 
built-in customizable Failover SPI implementations.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "At Least Once Guarantee"
-}
-[/block]
-As long as there is at least one node standing, no job will ever be lost.
-
-By default, Ignite will automatically fail over all jobs from stopped or crashed nodes. For custom failover behavior, you should implement the `ComputeTask.result(...)` method. The example below triggers failover whenever a job throws any `IgniteException` (or one of its subclasses):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyComputeTask extends 
ComputeTaskSplitAdapter<String, String> {\n    ...\n      \n    @Override \n    
public ComputeJobResultPolicy result(ComputeJobResult res, 
List<ComputeJobResult> rcvd) {\n        IgniteException err = 
res.getException();\n     \n        if (err != null)\n            return 
ComputeJobResultPolicy.FAILOVER;\n    \n        // If there is no exception, 
wait for all job results.\n        return ComputeJobResultPolicy.WAIT;\n    }\n 
 \n    ...\n}\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Closure Failover"
-}
-[/block]
-Closure failover is by default governed by `ComputeTaskAdapter`, which is triggered if a remote node either crashes or rejects closure execution. This default behavior may be overridden by using the `IgniteCompute.withNoFailover()` method, which creates an instance of `IgniteCompute` with a **no-failover flag** set on it. Here is an example:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCompute compute = 
ignite.compute().withNoFailover();\n\ncompute.apply(() -> {\n    // Do 
something\n    ...\n}, \"Some argument\");\n",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "AlwaysFailOverSpi"
-}
-[/block]
-`AlwaysFailoverSpi` always reroutes a failed job to another node. Note that at first an attempt will be made to reroute the failed job to a node that the task was not executed on. If no such nodes are available, then an attempt will be made to reroute the failed job to the nodes that may be running other jobs from the same task. If none of the above attempts succeed, then the job will not be failed over and `null` will be returned.
-
-The following configuration parameters can be used to configure 
`AlwaysFailoverSpi`.
-[block:parameters]
-{
-  "data": {
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-0": "`setMaximumFailoverAttempts(int)`",
-    "0-1": "Sets the maximum number of attempts to fail-over a failed job to 
other nodes.",
-    "0-2": "5"
-  },
-  "cols": 3,
-  "rows": 1
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" 
class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  
<bean class=\"org.apache.ignite.spi.failover.always.AlwaysFailoverSpi\">\n    
<property name=\"maximumFailoverAttempts\" value=\"5\"/>\n  </bean>\n  
...\n</bean>\n",
-      "language": "xml"
-    },
-    {
-      "code": "AlwaysFailoverSpi failSpi = new AlwaysFailoverSpi();\n 
\nIgniteConfiguration cfg = new IgniteConfiguration();\n \n// Override maximum 
failover attempts.\nfailSpi.setMaximumFailoverAttempts(5);\n \n// Override the 
default failover SPI.\ncfg.setFailoverSpi(failSpi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/compute-grid/job-scheduling.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/job-scheduling.md 
b/docs/wiki/compute-grid/job-scheduling.md
deleted file mode 100755
index 81f5132..0000000
--- a/docs/wiki/compute-grid/job-scheduling.md
+++ /dev/null
@@ -1,69 +0,0 @@
-In Ignite, jobs are mapped to cluster nodes during the initial task split or closure execution on the client side. However, once jobs arrive at the designated nodes, they need to be ordered for execution. By default, jobs are submitted to a thread pool and are executed in random order. However, if you need fine-grained control over job ordering, you can enable `CollisionSpi`.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "FIFO Ordering"
-}
-[/block]
-`FifoQueueCollisionSpi` allows a certain number of jobs to proceed without interruptions, in first-in, first-out order. All other jobs will be put on a waiting list until their turn.
-
-The number of parallel jobs is controlled by the `parallelJobsNumber` configuration parameter. The default is the number of cores times 2.
-
-##One at a Time
-Note that by setting `parallelJobsNumber` to 1, you can guarantee that all 
jobs will be executed one-at-a-time, and no two jobs will be executed 
concurrently.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\" 
singleton=\"true\">\n  ...\n  <property name=\"collisionSpi\">\n    <bean 
class=\"org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi\">\n    
  <!-- Execute one job at a time. -->\n      <property 
name=\"parallelJobsNumber\" value=\"1\"/>\n    </bean>\n  </property>\n  
...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "FifoQueueCollisionSpi colSpi = new FifoQueueCollisionSpi();\n 
\n// Execute jobs sequentially, one at a time, \n// by setting parallel job 
number to 1.\ncolSpi.setParallelJobsNumber(1);\n \nIgniteConfiguration cfg = 
new IgniteConfiguration();\n \n// Override default collision 
SPI.\ncfg.setCollisionSpi(colSpi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": null
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Priority Ordering"
-}
-[/block]
-`PriorityQueueCollisionSpi` allows you to assign priorities to individual jobs, so jobs with higher priority will be executed ahead of lower priority jobs.
-
-##Task Priorities
-Task priorities are set in the [task session](/docs/compute-tasks#distributed-task-session) via the `grid.task.priority` attribute. If no priority has been assigned to a task, then the default priority of 0 is used.
-
-Below is an example showing how task priority can be set. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class MyUrgentTask extends 
ComputeTaskSplitAdapter<Object, Object> {\n  // Auto-injected task session.\n  
@TaskSessionResource\n  private GridTaskSession taskSes = null;\n \n  
@Override\n  protected Collection<ComputeJob> split(int gridSize, Object arg) 
{\n    ...\n    // Set high task priority.\n    
taskSes.setAttribute(\"grid.task.priority\", 10);\n \n    List<ComputeJob> jobs 
= new ArrayList<>(gridSize);\n    \n    for (int i = 1; i <= gridSize; i++) {\n 
     jobs.add(new GridJobAdapter() {\n        ...\n      });\n    }\n    ...\n  
    \n    // These jobs will be executed with higher priority.\n    return 
jobs;\n  }\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-Just like with [FIFO Ordering](#fifo-ordering), the number of parallel jobs is controlled by the `parallelJobsNumber` configuration parameter.
-
-##Configuration
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean class=\"org.apache.ignite.IgniteConfiguration\" 
singleton=\"true\">\n\t...\n\t<property name=\"collisionSpi\">\n\t\t<bean 
class=\"org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi\">\n
      <!-- \n        Change the parallel job number if needed.\n        Default 
is number of cores times 2.\n      -->\n\t\t\t<property 
name=\"parallelJobsNumber\" 
value=\"5\"/>\n\t\t</bean>\n\t</property>\n\t...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "PriorityQueueCollisionSpi colSpi = new 
PriorityQueueCollisionSpi();\n\n// Change the parallel job number if 
needed.\n// Default is number of cores times 
2.\ncolSpi.setParallelJobsNumber(5);\n \nIgniteConfiguration cfg = new 
IgniteConfiguration();\n \n// Override default collision 
SPI.\ncfg.setCollisionSpi(colSpi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": ""
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/compute-grid/load-balancing.md
----------------------------------------------------------------------
diff --git a/docs/wiki/compute-grid/load-balancing.md 
b/docs/wiki/compute-grid/load-balancing.md
deleted file mode 100755
index 19814ff..0000000
--- a/docs/wiki/compute-grid/load-balancing.md
+++ /dev/null
@@ -1,59 +0,0 @@
-The load balancing component balances job distribution among cluster nodes. In Ignite, load balancing is achieved via `LoadBalancingSpi`, which controls the load on all nodes and makes sure that every node in the cluster is equally loaded. In homogeneous environments with homogeneous tasks, load balancing is achieved by random or round-robin policies. However, in many other use cases, especially under uneven load, more complex adaptive load-balancing policies may be needed.
-[block:callout]
-{
-  "type": "info",
-  "body": "Note that load balancing is triggered whenever your jobs are not 
collocated with data or have no real preference on which node to execute. If 
[Collocation Of Compute and Data](doc:collocate-compute-and-data) is used, then 
data affinity takes priority over load balancing.",
-  "title": "Data Affinity"
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Round-Robin Load Balancing"
-}
-[/block]
-`RoundRobinLoadBalancingSpi` iterates through nodes in round-robin fashion and 
picks the next sequential node. Two modes of operation are supported: per-task 
and global.
-
-##Per-Task Mode
-When configured in per-task mode, implementation will pick a random node at 
the beginning of every task execution and then sequentially iterate through all 
nodes in topology starting from the picked node. This is the default 
configuration For cases when split size is equal to the number of nodes, this 
mode guarantees that all nodes will participate in the split.
-
-##Global Mode
-When configured in global mode, a single sequential queue of nodes is maintained for all tasks, and the next node in the queue is picked every time. In this mode (unlike in per-task mode), it is possible that, even if the split size is equal to the number of nodes, some jobs within the same task will be assigned to the same node whenever multiple tasks are executing concurrently.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" 
class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  
<property name=\"loadBalancingSpi\">\n    <bean 
class=\"org.apache.ignite.spi.loadbalancing.roundrobin.RoundRobinLoadBalancingSpi\">\n
      <!-- Set to per-task round-robin mode (this is default behavior). -->\n   
   <property name=\"perTask\" value=\"true\"/>\n    </bean>\n  </property>\n  
...\n</bean>",
-      "language": "xml",
-      "name": null
-    },
-    {
-      "code": "RoundRobinLoadBalancingSpi = new 
RoundRobinLoadBalancingSpi();\n \n// Configure SPI to use per-task mode (this 
is default behavior).\nspi.setPerTask(true);\n \nIgniteConfiguration cfg = new 
IgniteConfiguration();\n \n// Override default load balancing 
SPI.\ncfg.setLoadBalancingSpi(spi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Random and Weighted Load Balancing"
-}
-[/block]
-`WeightedRandomLoadBalancingSpi` picks a random node for job execution by default. You can also optionally assign weights to nodes, so that nodes with larger weights will end up getting proportionally more jobs routed to them. By default, all nodes get an equal weight of 10.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean id=\"grid.custom.cfg\" 
class=\"org.apache.ignite.IgniteConfiguration\" singleton=\"true\">\n  ...\n  
<property name=\"loadBalancingSpi\">\n    <bean 
class=\"org.apache.ignite.spi.loadbalancing.weightedrandom.WeightedRandomLoadBalancingSpi\">\n
      <property name=\"useWeights\" value=\"true\"/>\n      <property 
name=\"nodeWeight\" value=\"10\"/>\n    </bean>\n  </property>\n  ...\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "WeightedRandomLoadBalancingSpi = new 
WeightedRandomLoadBalancingSpi();\n \n// Configure SPI to used weighted random 
load balancing.\nspi.setUseWeights(true);\n \n// Set weight for the local 
node.\nspi.setWeight(10);\n \nIgniteConfiguration cfg = new 
IgniteConfiguration();\n \n// Override default load balancing 
SPI.\ncfg.setLoadBalancingSpi(spi);\n \n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/data-grid/affinity-collocation.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/affinity-collocation.md 
b/docs/wiki/data-grid/affinity-collocation.md
deleted file mode 100755
index 77266e8..0000000
--- a/docs/wiki/data-grid/affinity-collocation.md
+++ /dev/null
@@ -1,78 +0,0 @@
-Given that the most common way to cache data is in `PARTITIONED` caches, collocating compute with data, or data with data, can significantly improve the performance and scalability of your application.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Collocate Data with Data"
-}
-[/block]
-In many cases it is beneficial to collocate different cache keys together if they will be accessed together. Quite often your business logic will require access to more than one cache key. By collocating them together you can make sure that all keys with the same `affinityKey` will be cached on the same processing node, hence avoiding costly network trips to fetch data from remote nodes.
-
-For example, let's say you have `Person` and `Company` objects and you want to collocate `Person` objects with the `Company` objects for which these persons work. To achieve that, the cache key used to cache `Person` objects should have a field or method annotated with the `@CacheAffinityKeyMapped` annotation, which will provide the value of the company key for collocation. For convenience, you can also optionally use the `CacheAffinityKey` class.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class PersonKey {\n    // Person ID used to identify a 
person.\n    private String personId;\n \n    // Company ID which will be used 
for affinity.\n    @GridCacheAffinityKeyMapped\n    private String companyId;\n 
   ...\n}\n\n// Instantiate person keys with the same company ID which is used 
as affinity key.\nObject personKey1 = new PersonKey(\"myPersonId1\", 
\"myCompanyId\");\nObject personKey2 = new PersonKey(\"myPersonId2\", 
\"myCompanyId\");\n \nPerson p1 = new Person(personKey1, ...);\nPerson p2 = new 
Person(personKey2, ...);\n \n// Both, the company and the person objects will 
be cached on the same node.\ncache.put(\"myCompanyId\", new 
Company(..));\ncache.put(personKey1, p1);\ncache.put(personKey2, p2);",
-      "language": "java",
-      "name": "using PersonKey"
-    },
-    {
-      "code": "Object personKey1 = new CacheAffinityKey(\"myPersonId1\", 
\"myCompanyId\");\nObject personKey2 = new CacheAffinityKey(\"myPersonId2\", 
\"myCompanyId\");\n \nPerson p1 = new Person(personKey1, ...);\nPerson p2 = new 
Person(personKey2, ...);\n \n// Both, the company and the person objects will 
be cached on the same node.\ncache.put(\"myCompanyId\", new 
Company(..));\ncache.put(personKey1, p1);\ncache.put(personKey2, p2);",
-      "language": "java",
-      "name": "using CacheAffinityKey"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "title": "SQL Joins",
-  "body": "When performing [SQL distributed 
joins](/docs/cache-queries#sql-queries) over data residing in partitioned 
caches, you must make sure that the join-keys are collocated."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Collocating Compute with Data"
-}
-[/block]
-It is also possible to route computations to the nodes where the data is cached. This concept is known as collocation of computations and data. It allows routing whole units of work to a certain node.
-
-To collocate compute with data you should use `IgniteCompute.affinityRun(...)` 
and `IgniteCompute.affinityCall(...)` methods.
-
-Here is how you can collocate your computation with the same cluster node on which the company and persons from the example above are cached.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "String companyId = \"myCompanyId\";\n \n// Execute Runnable on 
the node where the key is cached.\nignite.compute().affinityRun(\"myCache\", 
companyId, () -> {\n  Company company = cache.get(companyId);\n\n  // Since we 
collocated persons with the company in the above example,\n  // access to the 
persons objects is local.\n  Person person1 = cache.get(personKey1);\n  Person 
person2 = cache.get(personKey2);\n  ...  \n});",
-      "language": "java",
-      "name": "affinityRun"
-    },
-    {
-      "code": "final String companyId = \"myCompanyId\";\n \n// Execute 
Runnable on the node where the key is 
cached.\nignite.compute().affinityRun(\"myCache\", companyId, new 
IgniteRunnable() {\n  @Override public void run() {\n    Company company = 
cache.get(companyId);\n    \n    Person person1 = cache.get(personKey1);\n    
Person person2 = cache.get(personKey2);\n    ...\n  }\n};",
-      "language": "java",
-      "name": "java7 affinityRun"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCompute vs EntryProcessor"
-}
-[/block]
-Both `IgniteCompute.affinityRun(...)` and `IgniteCache.invoke(...)` methods offer the ability to collocate compute and data. The main difference is that the `invoke(...)` method is atomic and executes while holding a lock on the key. You should not access other keys from within the `EntryProcessor` logic, as it may cause a deadlock.
-
-`affinityRun(...)` and `affinityCall(...)`, on the other hand, do not hold any locks. For example, it is absolutely legal to start multiple transactions or execute cache queries from these methods without worrying about deadlocks. In this case, Ignite will automatically detect that the processing is collocated and will employ a light-weight 1-Phase-Commit optimization for transactions (instead of 2-Phase-Commit).
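-
-As an illustration, here is a minimal sketch of the `invoke(...)`/`EntryProcessor` approach, using the JCache `EntryProcessor` and `MutableEntry` types (the cache name and the increment logic are hypothetical):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<String, Integer> cache = ignite.cache(\"myCache\");\n\n// The processor executes atomically, holding a lock on the key.\ncache.invoke(\"myKey\", new EntryProcessor<String, Integer, Void>() {\n    @Override public Void process(MutableEntry<String, Integer> entry, Object... args) {\n        Integer val = entry.getValue();\n\n        // Hypothetical logic: increment the cached counter.\n        entry.setValue(val == null ? 1 : val + 1);\n\n        return null;\n    }\n});",
-      "language": "java",
-      "name": "EntryProcessor"
-    }
-  ]
-}
-[/block]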
-[block:callout]
-{
-  "type": "info",
-  "body": "See [JCache EntryProcessor](/docs/jcache#entryprocessor) 
documentation for more information about `IgniteCache.invoke(...)` method."
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/data-grid/automatic-db-integration.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/automatic-db-integration.md 
b/docs/wiki/data-grid/automatic-db-integration.md
deleted file mode 100755
index 15c4f4a..0000000
--- a/docs/wiki/data-grid/automatic-db-integration.md
+++ /dev/null
@@ -1,102 +0,0 @@
-Ignite supports integration with databases out of the box via the `org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStore` class, which implements the `org.apache.ignite.cache.store.CacheStore` interface.
-
-Ignite provides a utility that reads database metadata and generates POJO classes and XML configuration.
-
-The utility can be started with the following script:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "$ bin/ignite-schema-load.sh",
-      "language": "shell"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Connect to database"
-}
-[/block]
-JDBC drivers **are not supplied** with the utility. You should download (and install if needed) the appropriate JDBC driver for your database.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/jKCMIgmTi2uqSqgkgiTQ";,
-        "ignite-schema-load-01.png",
-        "650",
-        "650",
-        "#d6363a",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Generate XML configuration and POJO classes"
-}
-[/block]
-Select the tables you want to map to POJO classes and click the 'Generate' button.
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/13YM8mBRXaTB8yXWJkWI";,
-        "ignite-schema-load-02.png",
-        "650",
-        "650",
-        "",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Produced output"
-}
-[/block]
-The utility will generate POJO classes, XML configuration, and a Java code snippet for configuring the cache programmatically.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "/**\n * PersonKey definition.\n *\n * Code generated by Apache 
Ignite Schema Load utility: 03/03/2015.\n */\npublic class PersonKey implements 
Serializable {\n    /** */\n    private static final long serialVersionUID = 
0L;\n\n    /** Value for id. */\n    private int id;\n\n    /**\n     * Gets 
id.\n     *\n     * @return Value for id.\n     */\n    public int getId() {\n  
      return id;\n    }\n\n    /**\n     * Sets id.\n     *\n     * @param id 
New value for id.\n     */\n    public void setId(int id) {\n        this.id = 
id;\n    }\n\n    /** {@inheritDoc} */\n    @Override public boolean 
equals(Object o) {\n        if (this == o)\n            return true;\n\n        
if (!(o instanceof PersonKey))\n            return false;\n\n        PersonKey 
that = (PersonKey)o;\n\n        if (id != that.id)\n            return 
false;\n\n        return true;\n    }\n\n    /** {@inheritDoc} */\n    
@Override public int hashCode() {\n        int res = id;\n\n        return res;\n    }\n\n    /** {@inheritDoc} */\n    @Override public String toString() {\n        return \"PersonKey [id=\" + id +\n            \"]\";\n    }\n}",
-      "language": "java",
-      "name": "POJO Key class"
-    },
-    {
-      "code": "/**\n * Person definition.\n *\n * Code generated by Apache 
Ignite Schema Load utility: 03/03/2015.\n */\npublic class Person implements 
Serializable {\n    /** */\n    private static final long serialVersionUID = 
0L;\n\n    /** Value for id. */\n    private int id;\n\n    /** Value for 
orgId. */\n    private Integer orgId;\n\n    /** Value for name. */\n    
private String name;\n\n    /**\n     * Gets id.\n     *\n     * @return Value 
for id.\n     */\n    public int getId() {\n        return id;\n    }\n\n    
/**\n     * Sets id.\n     *\n     * @param id New value for id.\n     */\n    
public void setId(int id) {\n        this.id = id;\n    }\n\n    /**\n     * 
Gets orgId.\n     *\n     * @return Value for orgId.\n     */\n    public 
Integer getOrgId() {\n        return orgId;\n    }\n\n    /**\n     * Sets 
orgId.\n     *\n     * @param orgId New value for orgId.\n     */\n    public 
void setOrgId(Integer orgId) {\n        this.orgId = orgId;\n    }\n\n    /**\n 
  
   * Gets name.\n     *\n     * @return Value for name.\n     */\n    public 
String getName() {\n        return name;\n    }\n\n    /**\n     * Sets name.\n 
    *\n     * @param name New value for name.\n     */\n    public void 
setName(String name) {\n        this.name = name;\n    }\n\n    /** 
{@inheritDoc} */\n    @Override public boolean equals(Object o) {\n        if 
(this == o)\n            return true;\n\n        if (!(o instanceof Person))\n  
          return false;\n\n        Person that = (Person)o;\n\n        if (id 
!= that.id)\n            return false;\n\n        if (orgId != null ? 
!orgId.equals(that.orgId) : that.orgId != null)\n            return false;\n\n  
      if (name != null ? !name.equals(that.name) : that.name != null)\n         
   return false;\n\n        return true;\n    }\n\n    /** {@inheritDoc} */\n   
 @Override public int hashCode() {\n        int res = id;\n\n        res = 31 * 
res + (orgId != null ? orgId.hashCode() : 0);\n\n        res = 31 * res + (
 name != null ? name.hashCode() : 0);\n\n        return res;\n    }\n\n    /** 
{@inheritDoc} */\n    @Override public String toString() {\n        return 
\"Person [id=\" + id +\n            \", orgId=\" + orgId +\n            \", 
name=\" + name +\n            \"]\";\n    }\n}",
-      "language": "java",
-      "name": "POJO Value class"
-    },
-    {
-      "code": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<beans 
xmlns=\"http://www.springframework.org/schema/beans\"\n       
xmlns:util=\"http://www.springframework.org/schema/util\"\n       
xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n       
xsi:schemaLocation=\"http://www.springframework.org/schema/beans\n              
             http://www.springframework.org/schema/beans/spring-beans.xsd\n     
                      http://www.springframework.org/schema/util\n              
             http://www.springframework.org/schema/util/spring-util.xsd\">\n    
<bean class=\"org.apache.ignite.cache.CacheTypeMetadata\">\n        <property 
name=\"databaseSchema\" value=\"PUBLIC\"/>\n        <property 
name=\"databaseTable\" value=\"PERSON\"/>\n        <property name=\"keyType\" 
value=\"org.apache.ignite.examples.datagrid.store.model.PersonKey\"/>\n        
<property name=\"valueType\" 
value=\"org.apache.ignite.examples.datagrid.store.model.Person\"/>\n        
<property name=\"
 keyFields\">\n            <list>\n                <bean 
class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    
<property name=\"databaseName\" value=\"ID\"/>\n                    <property 
name=\"databaseType\">\n                        <util:constant 
static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n    
                <property name=\"javaName\" value=\"id\"/>\n                    
<property name=\"javaType\" value=\"int\"/>\n                </bean>\n          
  </list>\n        </property>\n        <property name=\"valueFields\">\n       
     <list>\n                <bean 
class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    
<property name=\"databaseName\" value=\"ID\"/>\n                    <property 
name=\"databaseType\">\n                        <util:constant 
static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n    
                <property name=\"javaName\" value=\"id\"/>\n      
               <property name=\"javaType\" value=\"int\"/>\n                
</bean>\n                <bean 
class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    
<property name=\"databaseName\" value=\"ORG_ID\"/>\n                    
<property name=\"databaseType\">\n                        <util:constant 
static-field=\"java.sql.Types.INTEGER\"/>\n                    </property>\n    
                <property name=\"javaName\" value=\"orgId\"/>\n                 
   <property name=\"javaType\" value=\"java.lang.Integer\"/>\n                
</bean>\n                <bean 
class=\"org.apache.ignite.cache.CacheTypeFieldMetadata\">\n                    
<property name=\"databaseName\" value=\"NAME\"/>\n                    <property 
name=\"databaseType\">\n                        <util:constant 
static-field=\"java.sql.Types.VARCHAR\"/>\n                    </property>\n    
                <property name=\"javaName\" value=\"name\"/>\n                  
  <property name=\"javaType\" value=\"java.lang.String\"/>\n                </bean>\n          
  </list>\n        </property>\n    </bean>\n</beans>",
-      "language": "xml",
-      "name": "XML Configuration"
-    },
-    {
-      "code": "IgniteConfiguration cfg = new 
IgniteConfiguration();\n...\nCacheConfiguration ccfg = new 
CacheConfiguration<>();\n\nDataSource dataSource = null; // TODO: Create data 
source for your database.\n\n// Create store. \nCacheJdbcPojoStore store = new 
CacheJdbcPojoStore();\nstore.setDataSource(dataSource);\n\n// Create store 
factory. \nccfg.setCacheStoreFactory(new 
FactoryBuilder.SingletonFactory<>(store));\n\n// Configure cache to use store. 
\nccfg.setReadThrough(true);\nccfg.setWriteThrough(true);\n\ncfg.setCacheConfiguration(ccfg);\n\n//
 Configure cache types. \nCollection<CacheTypeMetadata> meta = new 
ArrayList<>();\n\n// PERSON.\nCacheTypeMetadata type = new 
CacheTypeMetadata();\ntype.setDatabaseSchema(\"PUBLIC\");\ntype.setDatabaseTable(\"PERSON\");\ntype.setKeyType(\"org.apache.ignite.examples.datagrid.store.model.PersonKey\");\ntype.setValueType(\"org.apache.ignite.examples.datagrid.store.model.Person\");\n\n//
 Key fields for PERSON.\nCollection<CacheTypeFieldMetadata> keys = new ArrayList<>();\nkeys.add(new CacheTypeFieldMetadata(\"ID\", 
java.sql.Types.INTEGER,\"id\", int.class));\ntype.setKeyFields(keys);\n\n// 
Value fields for PERSON.\nCollection<CacheTypeFieldMetadata> vals = new 
ArrayList<>();\nvals.add(new CacheTypeFieldMetadata(\"ID\", 
java.sql.Types.INTEGER,\"id\", int.class));\nvals.add(new 
CacheTypeFieldMetadata(\"ORG_ID\", java.sql.Types.INTEGER,\"orgId\", 
Integer.class));\nvals.add(new CacheTypeFieldMetadata(\"NAME\", 
java.sql.Types.VARCHAR,\"name\", 
String.class));\ntype.setValueFields(vals);\n...\n// Start Ignite 
node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": "Java snippet"
-    }
-  ]
-}
-[/block]
-Copy the generated POJO Java classes into your project's source folder.
-
-Copy the `CacheTypeMetadata` declaration from the generated XML file and paste 
it into your project's XML configuration file under the appropriate `CacheConfiguration` root.
-
-Alternatively, paste the cache configuration snippet into the appropriate Java class in your 
project.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/data-grid/cache-modes.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/cache-modes.md 
b/docs/wiki/data-grid/cache-modes.md
deleted file mode 100755
index 54ef6a4..0000000
--- a/docs/wiki/data-grid/cache-modes.md
+++ /dev/null
@@ -1,237 +0,0 @@
-Ignite provides three different modes of cache operation: `LOCAL`, 
`REPLICATED`, and `PARTITIONED`. A cache mode is configured for each cache. 
Cache modes are defined in the `CacheMode` enumeration. 
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Local Mode"
-}
-[/block]
-`LOCAL` mode is the most lightweight mode of cache operation, as no data is 
distributed to other cache nodes. It is ideal for scenarios where data is 
either read-only, or can be periodically refreshed at some expiration 
frequency. It also works very well with read-through behavior where data is 
loaded from persistent storage on misses. Other than distribution, local caches 
still have all the features of a distributed cache, such as automatic data 
eviction, expiration, disk swapping, data querying, and transactions.
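-As a minimal sketch (the cache name here is illustrative), a `LOCAL` cache is configured the same way as the other modes shown in the [configuration](#configuration) section below:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"localCacheName\");\n\n// No data is distributed to other nodes; the cache lives only on this node.\ncacheCfg.setCacheMode(CacheMode.LOCAL);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": "local mode"
-    }
-  ]
-}
-[/block]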
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Replicated Mode"
-}
-[/block]
-In `REPLICATED` mode all data is replicated to every node in the cluster. This 
cache mode provides the utmost availability of data as it is available on every 
node. However, in this mode every data update must be propagated to all other 
nodes, which can impact performance and scalability. 
-
-As the same data is stored on all cluster nodes, the size of a replicated 
cache is limited by the amount of memory available on the node with the 
smallest amount of RAM. This mode is ideal for scenarios where cache reads are 
a lot more frequent than cache writes, and data sets are small. If your system 
does cache lookups over 80% of the time, then you should consider using 
`REPLICATED` cache mode.
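-For reference, a minimal sketch of a `REPLICATED` cache configuration (the cache name is illustrative), mirroring the snippets in the [configuration](#configuration) section below:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "CacheConfiguration cacheCfg = new CacheConfiguration();\n\ncacheCfg.setName(\"replicatedCacheName\");\n\n// Every node in the cluster will hold a full copy of the data.\ncacheCfg.setCacheMode(CacheMode.REPLICATED);\n\nIgniteConfiguration cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java",
-      "name": "replicated mode"
-    }
-  ]
-}
-[/block]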
-[block:callout]
-{
-  "type": "success",
-  "body": "Replicated caches should be used when data sets are small and 
updates are infrequent."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Partitioned Mode"
-}
-[/block]
-`PARTITIONED` mode is the most scalable distributed cache mode. In this mode 
the overall data set is divided equally into partitions and all partitions are 
split equally between participating nodes, essentially creating one huge 
distributed in-memory store for caching data. This approach allows you to store 
as much data as can fit in the total memory available across all nodes, 
allowing for multiple terabytes of data in cache memory across all cluster 
nodes. Essentially, the more nodes you have, the more data you can cache.
-
-Unlike `REPLICATED` mode, where updates are expensive because every node in 
the cluster needs to be updated, with `PARTITIONED` mode, updates become cheap 
because only the primary node (and optionally one or more backup nodes) needs to 
be updated for every key. However, reads become somewhat more expensive because 
only certain nodes have the data cached. 
-
-In order to avoid extra data movement, it is important to always access the 
data exactly on the node that has that data cached. This approach is called 
*affinity colocation* and is strongly recommended when working with partitioned 
caches.
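-As an illustrative sketch (this assumes the `Affinity` API obtained via `ignite.affinity(...)`; the exact type and method names may differ in your Ignite version), you can check which node is primary for a given key:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "// Affinity API is assumed here; the cache name is illustrative.\nAffinity<Integer> aff = ignite.affinity(\"cacheName\");\n\n// Find the cluster node on which key 42 is cached as the primary copy.\nClusterNode primary = aff.mapKeyToNode(42);\n\nSystem.out.println(\"Primary node for key 42: \" + primary.id());",
-      "language": "java",
-      "name": "affinity lookup"
-    }
-  ]
-}
-[/block]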
-[block:callout]
-{
-  "type": "success",
-  "body": "Partitioned caches are idea when working with large data sets and 
updates are frequent.",
-  "title": ""
-}
-[/block]
-The picture below illustrates a simple view of a partitioned cache. 
Essentially we have key K1 assigned to Node1, K2 assigned to Node2, and K3 
assigned to Node3. 
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/7pGSgxCVR3OZSHqYLdJv";,
-        "in_memory_data_grid.png",
-        "500",
-        "338",
-        "#d64304",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-See the [configuration](#configuration) section below for an example of how to 
configure the cache mode.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Cache Distribution Mode"
-}
-[/block]
-A node can operate in four different cache distribution modes when `PARTITIONED` 
mode is used. Cache distribution mode is defined by `CacheDistributionMode` 
enumeration and can be configured via `distributionMode` property of 
`CacheConfiguration`.
-[block:parameters]
-{
-  "data": {
-    "h-0": "Distribution Mode",
-    "h-1": "Description",
-    "0-0": "`PARTITIONED_ONLY`",
-    "0-1": "Local node may store primary and/or backup keys, but does not 
cache recently accessed keys, which are neither primaries nor backups, in a near 
cache.",
-    "1-0": "`CLIENT_ONLY`",
-    "1-1": "Local node does not cache any data and communicates with other 
cache nodes via remote calls.",
-    "2-0": "`NEAR_ONLY`",
-    "2-1": "Local node will not be primary or backup node for any key, but 
will cache recently accessed keys in a smaller near cache. Amount of recently 
accessed keys to cache is controlled by near eviction policy.",
-    "3-0": "`NEAR_PARTITIONED`",
-    "3-1": "Local node may store primary and/or backup keys, and also will 
cache recently accessed keys in near cache. Amount of recently accessed keys to 
cache is controlled by near eviction policy."
-  },
-  "cols": 2,
-  "rows": 4
-}
-[/block]
-By default, the `PARTITIONED_ONLY` cache distribution mode is enabled. It can be 
changed by setting the `distributionMode` configuration property in 
`CacheConfiguration`. For example:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n          \t<!-- 
Set a cache name. -->\n           \t<property name=\"name\" 
value=\"cacheName\"/>\n            \n          \t<!-- cache distribution mode. 
-->\n    \t\t\t\t<property name=\"distributionMode\" value=\"NEAR_ONLY\"/>\n    
\t\t\t\t... \n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setDistributionMode(CacheDistributionMode.NEAR_ONLY);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Atomic Write Order Mode"
-}
-[/block]
-When using a partitioned cache in `CacheAtomicityMode.ATOMIC` mode, one can 
configure the atomic cache write order mode. The atomic write order mode determines 
which node assigns the write version (the sender or the primary node) and is 
defined by the `CacheAtomicWriteOrderMode` enumeration. There are two modes: `CLOCK` and 
`PRIMARY`. 
-
-In `CLOCK` write order mode, write versions are assigned on the sender node. 
`CLOCK` mode is automatically turned on only when 
`CacheWriteSynchronizationMode.FULL_SYNC` is used, as it generally leads to 
better performance since write requests to primary and backup nodes are sent 
at the same time. 
-
-In `PRIMARY` write order mode, the write version is assigned only on the primary 
node. In this mode the sender sends write requests only to primary nodes, which 
in turn assign write versions and forward the requests to backups.
-
-Atomic write order mode can be configured via `atomicWriteOrderMode` property 
of `CacheConfiguration`. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n           
\t<!-- Set a cache name. -->\n           \t<property name=\"name\" 
value=\"cacheName\"/>\n          \t\n          \t<!-- Atomic write order mode. 
-->\n    \t\t\t\t<property name=\"atomicWriteOrderMode\" value=\"PRIMARY\"/>\n  
  \t\t\t\t... \n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setAtomicWriteOrderMode(CacheAtomicWriteOrderMode.CLOCK);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:callout]
-{
-  "type": "info",
-  "body": "For more information on `ATOMIC` mode, refer to 
[Transactions](/docs/transactions) section."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Primary and Backup Nodes"
-}
-[/block]
-In `PARTITIONED` mode, the nodes to which keys are assigned are called 
primary nodes for those keys. You can also optionally configure any number of 
backup nodes for cached data. If the number of backups is greater than 0, then 
Ignite will automatically assign backup nodes for each individual key. For 
example if the number of backups is 1, then every key cached in the data grid 
will have 2 copies, 1 primary and 1 backup.
-[block:callout]
-{
-  "type": "info",
-  "body": "By default, backups are turned off for better performance."
-}
-[/block]
-Backups can be configured by setting the `backups` property of 
`CacheConfiguration`, like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n           
\t<!-- Set a cache name. -->\n           \t<property name=\"name\" 
value=\"cacheName\"/>\n          \n          \t<!-- Set cache mode. -->\n    
\t\t\t\t<property name=\"cacheMode\" value=\"PARTITIONED\"/>\n          \t\n    
      \t<!-- Number of backup nodes. -->\n    \t\t\t\t<property 
name=\"backups\" value=\"1\"/>\n    \t\t\t\t... \n        </bean\n    
</property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setCacheMode(CacheMode.PARTITIONED);\n\ncacheCfg.setBackups(1);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Near Caches"
-}
-[/block]
-A partitioned cache can also be fronted by a `Near` cache, which is a smaller 
local cache that stores most recently or most frequently accessed data. Just 
like with a partitioned cache, the user can control the size of the near cache 
and its eviction policies. 
-
-In the vast majority of use cases, whenever utilizing Ignite with affinity 
colocation, near caches should not be used. If computations are collocated with 
the proper partitioned cache nodes, then a near cache is simply not needed 
because all the data is available locally in the partitioned cache.
-
-However, there are cases when it is simply impossible to send computations to 
remote nodes. For cases like this, near caches can significantly improve 
scalability and the overall performance of the application.
-
-The following configuration parameters relate to near caches. These parameters 
apply to `PARTITIONED` caches only.
-[block:parameters]
-{
-  "data": {
-    "0-0": "`setNearEvictionPolicy(CacheEvictionPolicy)`",
-    "h-0": "Setter Method",
-    "h-1": "Description",
-    "h-2": "Default",
-    "0-1": "Eviction policy for near cache.",
-    "0-2": "`CacheLruEvictionPolicy` with max size of 10,000.",
-    "1-0": "`setEvictNearSynchronized(boolean)`",
-    "1-1": "Flag indicating whether eviction is synchronized with near caches 
on remote nodes.",
-    "1-2": "true",
-    "2-0": "`setNearStartSize(int)`",
-    "3-0": "",
-    "2-2": "256",
-    "2-1": "Start size for near cache."
-  },
-  "cols": 3,
-  "rows": 3
-}
-[/block]
-
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n          \t<!-- 
Set a cache name. -->\n           \t<property name=\"name\" 
value=\"cacheName\"/>\n          \n           \t<!-- Start size for near cache. 
-->\n    \t\t\t\t<property name=\"nearStartSize\" value=\"512\"/>\n \n          
  <!-- Configure LRU eviction policy for near cache. -->\n            <property 
name=\"nearEvictionPolicy\">\n                <bean 
class=\"org.apache.ignite.cache.eviction.lru.CacheLruEvictionPolicy\">\n        
            <!-- Set max size to 1000. -->\n                    <property 
name=\"maxSize\" value=\"1000\"/>\n                </bean>\n            
</property>\n    \t\t\t\t... \n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setNearStartSize(512);\n\nCacheLruEvictionPolicy
 evctPolicy = new 
CacheLruEvictionPolicy();\nevctPolicy.setMaxSize(1000);\n\ncacheCfg.setNearEvictionPolicy(evctPolicy);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Configuration"
-}
-[/block]
-
-Cache modes are configured for each cache by setting the `cacheMode` property 
of `CacheConfiguration`, like so:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "<bean 
class=\"org.apache.ignite.configuration.IgniteConfiguration\">\n  \t...\n    
<property name=\"cacheConfiguration\">\n        <bean 
class=\"org.apache.ignite.configuration.CacheConfiguration\">\n           
\t<!-- Set a cache name. -->\n           \t<property name=\"name\" 
value=\"cacheName\"/>\n            \n          \t<!-- Set cache mode. -->\n    
\t\t\t\t<property name=\"cacheMode\" value=\"PARTITIONED\"/>\n    \t\t\t\t... 
\n        </bean>\n    </property>\n</bean>",
-      "language": "xml"
-    },
-    {
-      "code": "CacheConfiguration cacheCfg = new 
CacheConfiguration();\n\ncacheCfg.setName(\"cacheName\");\n\ncacheCfg.setCacheMode(CacheMode.PARTITIONED);\n\nIgniteConfiguration
 cfg = new IgniteConfiguration();\n\ncfg.setCacheConfiguration(cacheCfg);\n\n// 
Start Ignite node.\nIgnition.start(cfg);",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/data-grid/cache-queries.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/cache-queries.md 
b/docs/wiki/data-grid/cache-queries.md
deleted file mode 100755
index cf3db16..0000000
--- a/docs/wiki/data-grid/cache-queries.md
+++ /dev/null
@@ -1,164 +0,0 @@
-Ignite supports a very elegant query API with support for:
-
-  * [Predicate-based Scan Queries](#scan-queries)
-  * [SQL Queries](#sql-queries)
-  * [Text Queries](#text-queries)
-  * [Continuous Queries](#continuous-queries)
-  
-For SQL queries, Ignite supports in-memory indexing, so all data lookups 
are extremely fast. If you are caching your data in [off-heap 
memory](doc:off-heap-memory), then query indexes will be cached in 
off-heap memory as well.
-
-Ignite also provides support for custom indexing via `IndexingSpi` and 
`SpiQuery` class.
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Main Abstractions"
-}
-[/block]
-`IgniteCache` has several query methods, all of which receive a subclass of the 
`Query` class and return a `QueryCursor`.
-##Query
-The abstract `Query` class represents a paginated query to be executed 
on the distributed cache. You can set the page size for the returned cursor via 
`Query.setPageSize(...)` method (default is `1024`).
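-For example (a sketch reusing the `Person` cache from the query examples below; names are illustrative):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "SqlQuery sql = new SqlQuery(Person.class, \"salary > ?\");\n\n// Fetch results from remote nodes in pages of 512 entries instead of the default 1024.\nsql.setPageSize(512);\n\ntry (QueryCursor<Entry<Long, Person>> cursor = cache.query(sql.setArgs(1000))) {\n  for (Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "page size"
-    }
-  ]
-}
-[/block]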
-
-##QueryCursor
-`QueryCursor` represents a query result set and allows for transparent 
page-by-page iteration. Whenever a user starts iterating over the last page, the 
cursor will automatically request the next page in the background. For cases when 
pagination is not needed, you can use the `QueryCursor.getAll()` method, which will 
fetch the whole query result and store it in a collection.
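-A minimal sketch contrasting the two styles (again reusing the `Person` cache from the examples below):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "SqlQuery sql = new SqlQuery(Person.class, \"salary > ?\");\n\n// Transparent page-by-page iteration; the cursor closes when iteration completes.\ntry (QueryCursor<Entry<Long, Person>> cursor = cache.query(sql.setArgs(1000))) {\n  for (Entry<Long, Person> e : cursor)\n    System.out.println(e.getValue().toString());\n}\n\n// No pagination: fetch the whole result set into a collection at once.\nList<Entry<Long, Person>> all = cache.query(sql.setArgs(1000)).getAll();",
-      "language": "java",
-      "name": "cursor vs getAll"
-    }
-  ]
-}
-[/block]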
-[block:callout]
-{
-  "type": "info",
-  "title": "Closing Cursors",
-  "body": "Cursors will close automatically if you iterate to the end of the 
result set. If you need to stop iteration sooner, you must call `close()` on the cursor 
explicitly or use the try-with-resources (`AutoCloseable`) syntax."
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Scan Queries"
-}
-[/block]
-Scan queries allow for querying a cache in distributed form based on a 
user-defined predicate. 
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = 
ignite.jcache(\"mycache\");\n\n// Find only persons earning more than 
1,000.\ntry (QueryCursor<Cache.Entry<Long, Person>> cursor = cache.query(new ScanQuery<Long, Person>((k, p) -> 
p.getSalary() > 1000))) {\n  for (Cache.Entry<Long, Person> e : cursor)\n    
System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "scan"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = 
ignite.jcache(\"mycache\");\n\n// Find only persons earning more than 
1,000.\nIgniteBiPredicate<Long, Person> filter = new IgniteBiPredicate<Long, Person>() {\n  
@Override public boolean apply(Long key, Person p) {\n  \treturn p.getSalary() 
> 1000;\n\t}\n};\n\ntry (QueryCursor<Cache.Entry<Long, Person>> cursor = cache.query(new 
ScanQuery<Long, Person>(filter))) {\n  for (Cache.Entry<Long, Person> e : cursor)\n    
System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "java7 scan"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "SQL Queries"
-}
-[/block]
-Ignite supports free-form SQL queries virtually without any limitations. SQL 
syntax is ANSI-99 compliant. You can use any SQL function, any aggregation, any 
grouping, and Ignite will figure out where to fetch the results from.
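-For instance, a hypothetical aggregation over the `Person` type used throughout this section (a sketch; `SqlFieldsQuery` is covered under Field Queries below):
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = ignite.jcache(\"mycache\");\n\n// Average salary per organization, using standard SQL grouping and aggregation.\nSqlFieldsQuery sql = new SqlFieldsQuery(\n  \"select orgId, avg(salary) from Person group by orgId\");\n\ntry (QueryCursor<List<?>> cursor = cache.query(sql)) {\n  for (List<?> row : cursor)\n    System.out.println(\"orgId=\" + row.get(0) + \", avgSalary=\" + row.get(1));\n}",
-      "language": "java",
-      "name": "sql aggregation"
-    }
-  ]
-}
-[/block]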
-
-##SQL Joins
-Ignite supports distributed SQL joins. Moreover, if data resides in different 
caches, Ignite allows for cross-cache joins as well. 
-
-Joins between `PARTITIONED` and `REPLICATED` caches always work without any 
limitations. However, if you do a join between two `PARTITIONED` data sets, 
then you must make sure that the keys you are joining on are **collocated**. 
-
-##Field Queries
-Instead of selecting the whole object, you can choose to select only specific 
fields in order to minimize network and serialization overhead. For this 
purpose Ignite has a concept of `fields queries`.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = 
ignite.jcache(\"mycache\");\n\nSqlQuery sql = new SqlQuery(Person.class, 
\"salary > ?\");\n\n// Find only persons earning more than 1,000.\ntry 
(QueryCursor<Entry<Long, Person>> cursor = cache.query(sql.setArgs(1000))) {\n  
for (Entry<Long, Person> e : cursor)\n    
System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "sql"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = 
ignite.jcache(\"mycache\");\n\n// SQL join on Person and 
Organization.\nSqlQuery sql = new SqlQuery(Person.class,\n  \"from Person, 
Organization \"\n  + \"where Person.orgId = Organization.id \"\n  + \"and 
lower(Organization.name) = lower(?)\");\n\n// Find all persons working for 
Ignite organization.\ntry (QueryCursor<Entry<Long, Person>> cursor = 
cache.query(sql.setArgs(\"Ignite\"))) {\n  for (Entry<Long, Person> e : 
cursor)\n    System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "sql join"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = 
ignite.jcache(\"mycache\");\n\nSqlFieldsQuery sql = new SqlFieldsQuery(\"select 
concat(firstName, ' ', lastName) from Person\");\n\n// Select concatenated 
first and last name for all persons.\ntry (QueryCursor<List<?>> cursor = 
cache.query(sql)) {\n  for (List<?> row : cursor)\n    System.out.println(\"Full 
name: \" + row.get(0));\n}",
-      "language": "java",
-      "name": "sql fields"
-    },
-    {
-      "code": "IgniteCache<Long, Person> cache = 
ignite.jcache(\"mycache\");\n\n// Select with join between Person and 
Organization.\nSqlFieldsQuery sql = new SqlFieldsQuery(\n  \"select 
concat(firstName, ' ', lastName), Organization.name \"\n  + \"from Person, 
Organization where \"\n  + \"Person.orgId = Organization.id and \"\n  + 
\"Person.salary > ?\");\n\n// Only find persons with salary > 1000.\ntry 
(QueryCursor<List<?>> cursor = cache.query(sql.setArgs(1000))) {\n  for (List<?> 
row : cursor)\n    System.out.println(\"personName=\" + row.get(0) + \", 
orgName=\" + row.get(1));\n}",
-      "language": "java",
-      "name": "sql fields & join"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Text Queries"
-}
-[/block]
-Ignite also supports text-based queries based on Lucene indexing.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Long, Person> cache = 
ignite.jcache(\"mycache\");\n\n// Query for all people with \"Master Degree\" 
in their resumes.\nTextQuery txt = new TextQuery(Person.class, \"Master 
Degree\");\n\ntry (QueryCursor<Entry<Long, Person>> masters = cache.query(txt)) 
{\n  for (Entry<Long, Person> e : masters)\n    
System.out.println(e.getValue().toString());\n}",
-      "language": "java",
-      "name": "text query"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Continuous Queries"
-}
-[/block]
-Continuous queries are good for cases when you want to execute a query and 
then continue to get notified about the data changes that fall into your query 
filter.
-
-Continuous queries are supported via `ContinuousQuery` class, which supports 
the following:
-## Initial Query
-Whenever executing a continuous query, you have an option to execute an initial 
query before starting to listen to updates. The initial query can be set via 
`ContinuousQuery.setInitialQuery(Query)` method and can be of any query type, 
[Scan](#scan-queries), [SQL](#sql-queries), or [TEXT](#text-queries). This 
parameter is optional, and if not set, will not be used.
-## Remote Filter
-This filter is executed on the primary node for a given key and evaluates 
whether the event should be propagated to the listener. If the filter returns 
`true`, then the listener will be notified, otherwise the event will be 
skipped. Filtering events on the node on which they have occurred allows you to 
minimize unnecessary network traffic for listener notifications. Remote filter 
can be set via `ContinuousQuery.setRemoteFilter(CacheEntryEventFilter<K, V>)` 
method.
-## Local Listener
-Whenever events pass the remote filter, they will be sent to the client to 
notify the local listener there. The local listener is set via 
`ContinuousQuery.setLocalListener(CacheEntryUpdatedListener<K, V>)` method.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Integer, String> cache = 
ignite.jcache(\"mycache\");\n\n// Create new continuous 
query.\nContinuousQuery<Integer, String> qry = new ContinuousQuery<>();\n\n// 
Optional initial query to select all keys greater than 
10.\nqry.setInitialQuery(new ScanQuery<Integer, String>((k, v) -> k > 
10));\n\n// Callback that is called locally when update notifications are 
received.\nqry.setLocalListener((evts) -> \n\tevts.stream().forEach(e -> 
System.out.println(\"key=\" + e.getKey() + \", val=\" + e.getValue())));\n\n// 
This filter will be evaluated remotely on all nodes.\n// Entries that pass this 
filter will be sent to the caller.\nqry.setRemoteFilter(e -> e.getKey() > 
10);\n\n// Execute query.\ntry (QueryCursor<Cache.Entry<Integer, String>> cur = 
cache.query(qry)) {\n  // Iterate through existing data stored in cache.\n  for 
(Cache.Entry<Integer, String> e : cur)\n    System.out.println(\"key=\" + 
e.getKey() + \", val=\" + e.getValue());\n\n  // Add a few more keys and w
 atch a few more query notifications.\n  for (int i = 5; i < 15; i++)\n    
cache.put(i, Integer.toString(i));\n}\n",
-      "language": "java",
-      "name": "continuous query"
-    },
-    {
-      "code": "IgniteCache<Integer, String> cache = 
ignite.jcache(CACHE_NAME);\n\n// Create new continuous 
query.\nContinuousQuery<Integer, String> qry = new 
ContinuousQuery<>();\n\nqry.setInitialQuery(new ScanQuery<Integer, String>(new 
IgniteBiPredicate<Integer, String>() {\n  @Override public boolean 
apply(Integer key, String val) {\n    return key > 10;\n  }\n}));\n\n// 
Callback that is called locally when update notifications are 
received.\nqry.setLocalListener(new CacheEntryUpdatedListener<Integer, 
String>() {\n  @Override public void onUpdated(Iterable<CacheEntryEvent<? 
extends Integer, ? extends String>> evts) {\n    for (CacheEntryEvent<Integer, 
String> e : evts)\n      System.out.println(\"key=\" + e.getKey() + \", val=\" 
+ e.getValue());\n  }\n});\n\n// This filter will be evaluated remotely on all 
nodes.\n// Entries that pass this filter will be sent to the 
caller.\nqry.setRemoteFilter(new CacheEntryEventFilter<Integer, String>() {\n  
@Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> e) {\n    return e.getKey() > 
10;\n  }\n});\n\n// Execute query.\ntry (QueryCursor<Cache.Entry<Integer, 
String>> cur = cache.query(qry)) {\n  // Iterate through existing data.\n  for 
(Cache.Entry<Integer, String> e : cur)\n    System.out.println(\"key=\" + 
e.getKey() + \", val=\" + e.getValue());\n\n  // Add a few more keys and watch 
more query notifications.\n  for (int i = keyCnt; i < keyCnt + 10; i++)\n    
cache.put(i, Integer.toString(i));\n}",
-      "language": "java",
-      "name": "java7 continuous query"
-    }
-  ]
-}
-[/block]
-
-[block:api-header]
-{
-  "type": "basic",
-  "title": "Query Configuration"
-}
-[/block]
-Queries can be configured from code by using `@QuerySqlField` annotations.
-[block:code]
-{
-  "codes": [
-    {
-      "code": "public class Person implements Serializable {\n  /** Person ID 
(indexed). */\n  @QuerySqlField(index = true)\n  private long id;\n\n  /** 
Organization ID (indexed). */\n  @QuerySqlField(index = true)\n  private long 
orgId;\n\n  /** First name (not-indexed). */\n  @QuerySqlField\n  private 
String firstName;\n\n  /** Last name (not indexed). */\n  @QuerySqlField\n  
private String lastName;\n\n  /** Resume text (create LUCENE-based TEXT index 
for this field). */\n  @QueryTextField\n  private String resume;\n\n  /** 
Salary (indexed). */\n  @QuerySqlField(index = true)\n  private double 
salary;\n  \n  ...\n}",
-      "language": "java"
-    }
-  ]
-}
-[/block]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-ignite/blob/8d15d8f6/docs/wiki/data-grid/data-grid.md
----------------------------------------------------------------------
diff --git a/docs/wiki/data-grid/data-grid.md b/docs/wiki/data-grid/data-grid.md
deleted file mode 100755
index 3f65adf..0000000
--- a/docs/wiki/data-grid/data-grid.md
+++ /dev/null
@@ -1,68 +0,0 @@
-Ignite in-memory data grid has been built from the ground up with a notion of 
horizontal scale and the ability to add nodes on demand in real time; it has been 
designed to scale linearly to hundreds of nodes with strong semantics for data 
locality and affinity-based data routing that reduce redundant data movement.
-
-Ignite data grid supports local, replicated, and partitioned data sets and 
allows you to freely query across these data sets using standard SQL syntax. 
Ignite supports standard SQL for querying in-memory data including support for 
distributed SQL joins. 
-
-Ignite data grid is lightning fast and is one of the fastest implementations 
of transactional and atomic data caching in a cluster today.
-[block:callout]
-{
-  "type": "success",
-  "title": "Data Consistency",
-  "body": "As long as your cluster is alive, Ignite will guarantee that the 
data between different cluster nodes will always remain consistent regardless 
of crashes or topology changes."
-}
-[/block]
-
-[block:callout]
-{
-  "type": "success",
-  "title": "JCache (JSR 107)",
-  "body": "Ignite Data Grid implements [JCache](doc:jcache) (JSR 107) 
specification (currently undergoing JSR 107 TCK testing)"
-}
-[/block]
-
-[block:image]
-{
-  "images": [
-    {
-      "image": [
-        "https://www.filepicker.io/api/file/ZBWQwPXbQmyq6RRUyWfm";,
-        "in-memory-data-grid-1.jpg",
-        "500",
-        "338",
-        "#e8893c",
-        ""
-      ]
-    }
-  ]
-}
-[/block]
-##Features
-  * Distributed In-Memory Caching
-  * Lightning Fast Performance
-  * Elastic Scalability
-  * Distributed In-Memory Transactions
-  * Web Session Clustering
-  * Hibernate L2 Cache Integration
-  * Tiered Off-Heap Storage
-  * Distributed SQL Queries with support for Joins
-[block:api-header]
-{
-  "type": "basic",
-  "title": "IgniteCache"
-}
-[/block]
-The `IgniteCache` interface is a gateway into the Ignite cache implementation and 
provides methods for storing and retrieving data, executing queries (including 
SQL), iterating, and scanning.
-
-##JCache
-The `IgniteCache` interface extends the `javax.cache.Cache` interface from the JCache 
specification and adds functionality to it, mainly having to do with 
local vs. distributed operations, queries, metrics, etc.
-
-You can obtain an instance of `IgniteCache` as follows:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "Ignite ignite = Ignition.ignite();\n\n// Obtain instance of 
cache named \"myCache\".\n// Note that different caches may have different 
generics.\nIgniteCache<Integer, String> cache = ignite.jcache(\"myCache\");",
-      "language": "java"
-    }
-  ]
-}
-[/block]
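-Once obtained, the cache can be used through the standard JCache operations it inherits; a minimal sketch:
-[block:code]
-{
-  "codes": [
-    {
-      "code": "IgniteCache<Integer, String> cache = ignite.jcache(\"myCache\");\n\n// Standard JCache-style operations inherited from javax.cache.Cache.\ncache.put(1, \"Hello\");\ncache.put(2, \"World\");\n\nSystem.out.println(cache.get(1) + \" \" + cache.get(2));",
-      "language": "java",
-      "name": "basic usage"
-    }
-  ]
-}
-[/block]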
\ No newline at end of file
