Concourse TLS cert has expired

2020-12-02 Thread Dan Smith
If I go to https://concourse.apachegeode-ci.info/ I get this error - who can 
fix this?

The certificate for concourse.apachegeode-ci.info expired on 12/2/2020.

-Dan


Re: [PROPOSAL] Change the default value of conserve-sockets to false

2020-12-02 Thread Barrett Oglesby
I ran a bunch of tests using the long-running-test code where the servers had a 
mix of conserve-sockets settings, and they all worked ok.

One set of tests had 6 servers - 3 with conserve-sockets=false and 3 with 
conserve-sockets=true.

Another set of tests had 4 servers - 3 with conserve-sockets=false and 1 with 
conserve-sockets=true.
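
For anyone reproducing this, conserve-sockets is a per-member gemfire property,
so a mixed cluster like the above just means each server starts with its own
setting, e.g. in gemfire.properties:

# servers 1-3
conserve-sockets=false

# servers 4-6
conserve-sockets=true

(It can also be passed as a system property, e.g.
-Dgemfire.conserve-sockets=false.)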

In each case, the multi-threaded client did (a rough sketch follows the list):

- puts
- gets
- destroys
- function updates
- oql queries
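
Here is that sketch - single-threaded for brevity where the real client is
multi-threaded, and with placeholder names (region "data", function id
"update-function") rather than the actual long-running-test code:

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.execute.FunctionService;
import org.apache.geode.cache.query.SelectResults;

public class MixedSettingsClient {
  public static void main(String[] args) throws Exception {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();
    Region<Integer, byte[]> region = cache
        .<Integer, byte[]>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .create("data"); // placeholder region name

    for (int i = 0; i < 100; i++) {
      region.put(i, new byte[1024]);          // puts
      region.get(i);                          // gets
      FunctionService.onRegion(region)
          .setArguments(i)
          .execute("update-function")         // function updates (placeholder id)
          .getResult();
      SelectResults<?> results = (SelectResults<?>) cache.getQueryService()
          .newQuery("SELECT * FROM /data").execute();  // oql queries
      System.out.println("query returned " + results.size() + " results");
      region.destroy(i);                      // destroys
    }
    cache.close();
  }
}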

One thing I found interesting was that the server where the operation
originated dictated which thread was used on the remote server. If the server where the 
operation originated had conserve-sockets=false, then the remote server used an 
unshared P2P message reader to process the replication no matter what its 
conserve-sockets setting was. And if the server where the operation originated 
had conserve-sockets=true, then the remote server used a shared P2P message 
reader to process the replication no matter what its conserve-sockets setting 
was.
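
For reference, a minimal sketch of an observer like the
TestDistributionMessageObserver that produced the logging below - written
against Geode's internal DistributionMessageObserver API, whose exact
signatures vary a bit across versions - looks like this:

import org.apache.geode.distributed.internal.ClusterDistributionManager;
import org.apache.geode.distributed.internal.DistributionMessage;
import org.apache.geode.distributed.internal.DistributionMessageObserver;

public class TestDistributionMessageObserver extends DistributionMessageObserver {

  // Log the current thread name with each operation so shared vs. unshared
  // P2P message reader threads show up in the output.
  private void log(String operation, DistributionMessage message) {
    System.out.println(Thread.currentThread().getName()
        + ": TestDistributionMessageObserver operation=" + operation
        + "; time=" + System.currentTimeMillis()
        + "; message=" + message
        + "; recipients=" + java.util.Arrays.toString(message.getRecipients()));
  }

  @Override
  public void beforeSendMessage(ClusterDistributionManager dm, DistributionMessage message) {
    log("beforeSendMessage", message);
  }

  @Override
  public void beforeProcessMessage(ClusterDistributionManager dm, DistributionMessage message) {
    log("beforeProcessMessage", message);
  }

  @Override
  public void afterProcessMessage(ClusterDistributionManager dm, DistributionMessage message) {
    log("afterProcessMessage", message);
  }
}

Each server registers it with
DistributionMessageObserver.setInstance(new TestDistributionMessageObserver())
before the client starts its operations.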

Here is some logging from a DistributionMessageObserver that shows that 
behavior.

Case 1:

The server (server1) that processes the put operation from the client is 
primary and has conserve-sockets=false.
The server (server2) that handles the UpdateWithContextMessage has 
conserve-sockets=true.

1. A ServerConnection thread in server1 sends the UpdateWithContextMessage:

ServerConnection on port 60802 Thread 4: TestDistributionMessageObserver 
operation=beforeSendMessage; time=1606929894787; 
message=UpdateOperation$UpdateWithContextMessage(region 
path='/__PR/_B__data_48'; op=UPDATE; key=0; newValue=(10485820 bytes)); 
recipients=[192.168.1.8(server-conserve-sockets1:58995):41002]

2. An unshared P2P message reader in server2 handles the 
UpdateWithContextMessage even though conserve-sockets=true:

P2P message reader for 192.168.1.8(server1:58984):41001 unshared ordered 
uid=11 dom #1 local port=58405 remote port=60860: DistributionMessage.schedule 
msg=UpdateOperation$UpdateWithContextMessage(region path='/__PR/_B__data_48'; 
sender=192.168.1.8(server1:58984):41001; op=UPDATE; key=0; 
newValue=(10485820 bytes))
P2P message reader for 192.168.1.8(server1:58984):41001 unshared ordered 
uid=11 dom #1 local port=58405 remote port=60860: 
TestDistributionMessageObserver operation=beforeProcessMessage; 
time=1606929894809; message=UpdateOperation$UpdateWithContextMessage(region 
path='/__PR/_B__data_48'; sender=192.168.1.8(server1:58984):41001; 
op=UPDATE; key=0; newValue=(10485820 bytes)); recipients=[null]
P2P message reader for 192.168.1.8(server1:58984):41001 unshared ordered 
uid=11 dom #1 local port=58405 remote port=60860: 
TestDistributionMessageObserver operation=afterProcessMessage; 
time=1606929894810; message=UpdateOperation$UpdateWithContextMessage(region 
path='/__PR/_B__data_48'; sender=192.168.1.8(server1:58984):41001; 
op=UPDATE; key=0; newValue=(10485820 bytes)); recipients=[null]

Case 2:

The server (server1) that processes the put operation from the client is 
primary and has conserve-sockets=true.
The server (server2) that handles the UpdateWithContextMessage has 
conserve-sockets=false.

1. A ServerConnection thread in server1 sends the UpdateWithContextMessage:

ServerConnection on port 61474 Thread 1: TestDistributionMessageObserver 
operation=beforeSendMessage; time=1606932400283; 
message=UpdateOperation$UpdateWithContextMessage(region 
path='/__PR/_B__data_48'; op=UPDATE; key=0; newValue=(10485820 bytes)); 
recipients=[192.168.1.8(server1:63224):41001]

2. The shared P2P message reader in server2 handles the 
UpdateWithContextMessage and sends the ReplyMessage even though 
conserve-sockets=false:

P2P message reader for 192.168.1.8(server-conserve-sockets1:63240):41002 
shared ordered uid=4 local port=54619 remote port=61472: 
TestDistributionMessageObserver operation=beforeProcessMessage; 
time=1606932400295; message=UpdateOperation$UpdateWithContextMessage(region 
path='/__PR/_B__data_48'; 
sender=192.168.1.8(server-conserve-sockets1:63240):41002; op=UPDATE; 
key=0; newValue=(10485820 bytes)); recipients=[null]
P2P message reader for 192.168.1.8(server-conserve-sockets1:63240):41002 
shared ordered uid=4 local port=54619 remote port=61472: 
TestDistributionMessageObserver operation=beforeSendMessage; 
time=1606932400296; message=ReplyMessage processorId=42 from null; 
recipients=[192.168.1.8(server-conserve-sockets1:63240):41002]
P2P message reader for 192.168.1.8(server-conserve-sockets1:63240):41002 
shared ordered uid=4 local port=54619 remote port=61472: 
TestDistributionMessageObserver operation=afterProcessMessage; 
time=1606932400296; message=UpdateOperation$UpdateWithContextMessage(region 
path='/__PR/_B__data_48'; 
sender=192.168.1.8(server-conserve-sockets1:63240):41002; op=UPDATE; 
key=0; newValue=(10485820 bytes)); recipients=[null]

3. The shared P2P message reader in server1 handles the ReplyMessage:

P2P message reader for 192.168.1.8(server1:63224):41001 shared unordered 
uid=3 loca

Re: [PROPOSAL] Change the default value of conserve-sockets to false

2020-12-02 Thread Xiaojian Zhou
+1
I think it’s good to change the default back to false. It was false before.


Re: [PROPOSAL] Change the default value of conserve-sockets to false

2020-12-02 Thread Xiaojian Zhou
OK, I double-checked, and my memory is wrong. It was true as early as 6.0.


Re: Geode - store and query JSON documents

2020-12-02 Thread ankit Soni
Thanks a lot, Xiaojian Zhou, for your clear explanation and detailed reply. It 
has helped me a lot in proceeding with my experiments.

Ankit.

On Fri, Nov 27, 2020, 5:48 AM Xiaojian Zhou  wrote:

> Ankit:
>
> I wrote some Lucene sample code using your data and query.
>
> I also provided the gfsh commands to create the index for the nested query.
>
> Note: I purposely provided two documents to show the difference in the query
> results.
>
> package examples;
>
> import org.apache.geode.cache.Region;
> import org.apache.geode.cache.client.ClientCache;
> import org.apache.geode.cache.client.ClientCacheFactory;
> import org.apache.geode.cache.client.ClientRegionShortcut;
> import org.apache.geode.cache.lucene.LuceneQuery;
> import org.apache.geode.cache.lucene.LuceneQueryException;
> import org.apache.geode.cache.lucene.LuceneServiceProvider;
> import org.apache.geode.cache.lucene.PageableLuceneQueryResults;
> import org.apache.geode.cache.lucene.internal.LuceneIndexImpl;
> import org.apache.geode.cache.lucene.internal.LuceneServiceImpl;
> import org.apache.geode.pdx.JSONFormatter;
> import org.apache.geode.pdx.PdxInstance;
>
> import java.io.IOException;
> import java.util.HashSet;
> import java.util.LinkedList;
> import java.util.List;
> import java.util.concurrent.TimeUnit;
> import java.util.concurrent.atomic.AtomicInteger;
>
> public class JSONTest {
>   // NOTE: Below is truncated JSON; a single JSON document can contain at most
>   // an array of col1...col30 (30 different attributes) within data.
>   public final static String jsonDoc_2 = "{" +
>   "\"data\":[{" +
>   "\"col1\": {" +
>   "\"k11\": \"aaa\"," +
>   "\"k12\":true," +
>   "\"k13\": ," +
>   "\"k14\": \"2020-12-31:00:00:00\"" +
>   "}," +
>   "\"col2\":[{" +
>   "\"k21\": \"22\"," +
>   "\"k22\": true" +
>   "}]" +
>   "}]" +
>   "}";
>   public final static String jsonDoc_3 = "{" +
>   "\"data\":[{" +
>   "\"col1\": {" +
>   "\"k11\": \"bbb\"," +
>   "\"k12\":true," +
>   "\"k13\": ," +
>   "\"k14\": \"2020-12-31:00:00:00\"" +
>   "}," +
>   "\"col2\":[{" +
>   "\"k21\": \"33\"," +
>   "\"k22\": true" +
>   "}]" +
>   "}]" +
>   "}";
>
>   // NOTE: col1...col30 are a mix of JSONObject ({}) and JSONArray ([]) as
>   // shown above in jsonDoc_2.
>
>   public final static String REGION_NAME = "REGION_NAME";
>
>   public static void main(String[] args) throws InterruptedException,
> LuceneQueryException {
>
> //create client-cache
> ClientCache cache = new
> ClientCacheFactory().addPoolLocator("localhost",
> 10334).setPdxReadSerialized(true).create();
>     Region<String, PdxInstance> region = cache
>         .<String, PdxInstance>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
>         .create(REGION_NAME);
>
> //store json document
> region.put("key", JSONFormatter.fromJSON(jsonDoc_2));
> region.put("key3", JSONFormatter.fromJSON(jsonDoc_3));
>
> LuceneServiceImpl service = (LuceneServiceImpl)
> LuceneServiceProvider.get(cache);
> LuceneIndexImpl index = (LuceneIndexImpl)
> service.getIndex("jsonIndex", "REGION_NAME");
> if (index != null) {
>   service.waitUntilFlushed("jsonIndex", "REGION_NAME", 6,
> TimeUnit.MILLISECONDS);
> }
>
> LuceneQuery query =
> service.createLuceneQueryFactory().create("jsonIndex", "REGION_NAME",
> "22 OR 33", "data.col2.k21");
> System.out.println("Query 22 OR 33");
> HashSet results = getResults(query, "REGION_NAME");
>
> LuceneQuery query2 =
> service.createLuceneQueryFactory().create("jsonIndex", "REGION_NAME",
> "aaa OR xxx OR yyy", "data.col1.k11");
> System.out.println("Query aaa OR xxx OR yyy");
> results = getResults(query2, "REGION_NAME");
>
> // server side:
> // gfsh> start locator
> // gfsh> start server --name=server50505 --server-port=50505
> // gfsh> create lucene index --name=jsonIndex --region=/REGION_NAME
> --field=data.col2.k21,data.col1.k11
> // --serializer=org.apache.geode.cache.lucene.FlatFormatSerializer
> // gfsh> create region --name=REGION_NAME --type=PARTITION
> --redundant-copies=1 --total-num-buckets=61
>
> // How to query json document like,
>
> // 1. select col2.k21, col1, col20 from /REGION_NAME where
> //data.col2.k21 = '22' OR data.col2.k21 = '33'
>
> // 2. select col2.k21, col1.k11, col1 from /REGION_NAME where
> // data.col1.k11 in ('aaa', 'xxx', 'yyy')
>   }
>
>   private static HashSet getResults(LuceneQuery query, String regionName)
> throws LuceneQueryException {
> if (query == null) {
>   return null;
> }
>
> PageableLuceneQueryResults results = query.findPages();
> if (results.size() > 0) {
>   System.out.println("Search found " + results.size() + " results in "
> + regionName + ", page size is " + query.getPageSize());
> }
>
> HashS