nickva commented on PR #5625:
URL: https://github.com/apache/couchdb/pull/5625#issuecomment-3216239859
> Nice work! Is there a break-even point where things get worse? Can you
run those tests on smaller clusters too?
@big-r81 I found a 3-node cluster with 64 GB of RAM each. Since it's a
smaller cluster, I ran with Q=16.
```
const db = 'k6db';
const q = '16';
const batch_size = 1000;
const doc_size = 64;
const dbs = 1;
const docs = 1000000;
```
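For a sense of scale, those parameters imply 1,000 `_bulk_docs` requests and roughly 61 MB of raw document bodies for the load phase. A quick sanity check of the arithmetic (plain JavaScript, not part of the k6 script):

```javascript
// Test parameters from the config above.
const batch_size = 1000;
const doc_size = 64; // bytes per doc body
const dbs = 1;
const docs = 1000000;

// Number of _bulk_docs requests needed to load all docs.
const bulkRequests = (docs / batch_size) * dbs;

// Approximate raw payload pushed during the bulk-load phase.
const bulkPayloadMB = (docs * doc_size * dbs) / (1024 * 1024);

console.log(`bulk_docs requests: ${bulkRequests}`);      // 1000
console.log(`~payload: ${bulkPayloadMB.toFixed(0)} MB`); // 61 MB
```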
I used the same k6 script as above.
#### MAIN ####
```
HTTP
http_req_duration..............: avg=1.52s    min=4.5ms   med=1.51s    max=7.29s    p(90)=3.02s    p(95)=3.18s
  { expected_response:true }...: avg=1.52s    min=4.5ms   med=1.51s    max=7.29s    p(90)=3.02s    p(95)=3.18s
  { name:bulk_docs }...........: avg=90.65ms  min=72.39ms med=89.05ms  max=543.13ms p(90)=100.43ms p(95)=103.92ms
  { name:get_doc }.............: avg=2.15s    min=4.5ms   med=2.65s    max=7.29s    p(90)=3.26s    p(95)=3.39s
  { name:post_doc }............: avg=989.49ms min=13.98ms med=826.58ms max=3.35s    p(90)=2.05s    p(95)=2.33s
  { name:put_doc }.............: avg=1.44s    min=10.49ms med=1.68s    max=3.46s    p(90)=2.51s    p(95)=2.67s
http_req_failed................: 0.17%   5672 out of 3237326
http_reqs......................: 3237326 4353.793364/s

EXECUTION
iteration_duration.............: avg=5.59s min=1.07s med=6.41s max=9.72s p(90)=7.85s p(95)=8.06s
iterations.....................: 1078774 1450.81437/s
vus............................: 0       min=0     max=10000
vus_max........................: 10000   min=10000 max=10000

NETWORK
data_received..................: 2.1 GB 2.8 MB/s
data_sent......................: 596 MB 801 kB/s

running (12m23.6s), 00000/10000 VUs, 1078774 complete and 0 interrupted iterations
```
#### PR ####
```
HTTP
http_req_duration..............: avg=1.1s     min=3.53ms  med=1.07s    max=4.71s    p(90)=2s      p(95)=2.16s
  { expected_response:true }...: avg=1.1s     min=3.53ms  med=1.07s    max=4.71s    p(90)=1.99s   p(95)=2.16s
  { name:bulk_docs }...........: avg=74.84ms  min=63.94ms med=72.57ms  max=393.01ms p(90)=82.32ms p(95)=85.67ms
  { name:get_doc }.............: avg=624.54ms min=3.53ms  med=664.09ms max=4.03s    p(90)=1.24s   p(95)=1.35s
  { name:post_doc }............: avg=1.12s    min=13.98ms med=1.01s    max=4.71s    p(90)=1.9s    p(95)=2.09s
  { name:put_doc }.............: avg=1.55s    min=17.55ms med=1.65s    max=4.67s    p(90)=2.18s   p(95)=2.29s
http_req_failed................: 0.14%   5896 out of 4179620
http_reqs......................: 4179620 5757.994289/s

EXECUTION
iteration_duration.............: avg=4.31s min=1.09s med=4.67s max=8.41s p(90)=5.25s p(95)=5.41s
iterations.....................: 1392872 1918.870381/s
vus............................: 1699    min=0     max=10000
vus_max........................: 10000   min=10000 max=10000

NETWORK
data_received..................: 2.7 GB 3.7 MB/s
data_sent......................: 738 MB 1.0 MB/s

running (12m05.9s), 00000/10000 VUs, 1392872 complete and 0 interrupted iterations
```
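Comparing the headline numbers from the two runs, the PR pushes roughly 32% more requests per second and cuts the median `get_doc` latency about 4x. A quick calculation from the figures above (names are mine, values copied from the k6 output):

```javascript
// Headline numbers copied from the two runs above.
const main = { reqsPerSec: 4353.793364, getDocMedianMs: 2650, iterPerSec: 1450.81437 };
const pr   = { reqsPerSec: 5757.994289, getDocMedianMs: 664.09, iterPerSec: 1918.870381 };

// Percent improvement of b over a.
const pct = (a, b) => ((b / a - 1) * 100).toFixed(1);

console.log(`http_reqs/s:    +${pct(main.reqsPerSec, pr.reqsPerSec)}%`); // +32.3%
console.log(`iterations/s:   +${pct(main.iterPerSec, pr.iterPerSec)}%`); // +32.3%
console.log(`get_doc median: ${(main.getDocMedianMs / pr.getDocMedianMs).toFixed(1)}x faster`); // 4.0x
```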
```
{
  "full": {
    "value": 0,
    "type": "counter",
    "desc": "number of times bt_engine cache was full"
  },
  "hits": {
    "value": 10302337,
    "type": "counter",
    "desc": "number of bt_engine cache hits"
  },
  "misses": {
    "value": 2212648,
    "type": "counter",
    "desc": "number of bt_engine cache misses"
  }
}
```
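Those counters work out to an ~82% hit rate for the bt_engine cache over the run, with the cache never filling. Quick arithmetic on the dump above:

```javascript
// Cache counters from the stats dump above.
const hits = 10302337;
const misses = 2212648;

const hitRate = hits / (hits + misses);
console.log(`bt_engine cache hit rate: ${(hitRate * 100).toFixed(1)}%`); // 82.3%
```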
It held up pretty well. The metric graphs show improvements in throughput,
latency, and request rate:
<img width="836" height="665" alt="btree_3node_cluster"
src="https://github.com/user-attachments/assets/10cda272-2e28-4cf9-afc9-8cfd439a6b84"
/>
(The first grey bump is the bulk_load phase)