Hi Eric, Toke,
Could you please look at the details shared in the email trail below and respond
with your suggestions/feedback?
Thanks & Regards,
Vinodh
From: Kommu, Vinodh K.
Sent: Monday, July 6, 2020 4:58 PM
To: solr-user@lucene.apache.org
Subject: RE: Time-out errors while indexing (Solr 7.7.1)
de1/solr/Collection1_shard1_replica_n20
209G node2/solr/Collection1_shard4_replica_n16
1.3T total
Thanks & Regards,
Vinodh
-----Original Message-----
From: Erick Erickson
Sent: Saturday, July 4, 2020 7:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Time-out errors while indexing (Solr 7.7.1)
Thanks a lot for your inputs and suggestions. I was thinking along similar
lines: creating another collection of the same kind (hot and cold), and moving
documents older than a certain age, say 180 days, from the original collection
(hot) to the new collection (cold).
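The migration step described above can be sketched roughly as follows. The date field, collection names, and Solr endpoint here are all hypothetical placeholders, not details from this thread; the usual flow would be to re-index the old documents into the cold collection first, then delete them from the hot one with a delete-by-query like this:

```python
from urllib.parse import urlencode

# All names below are assumptions for illustration only.
SOLR_BASE = "http://localhost:8983/solr"          # assumed Solr endpoint
AGE_QUERY = "index_date_dt:[* TO NOW-180DAYS]"    # Solr date-math range: older than 180 days

def delete_old_docs_url(collection: str) -> str:
    """Build the update-handler URL for a delete-by-query against one collection."""
    params = urlencode({"commit": "true"})
    return f"{SOLR_BASE}/{collection}/update?{params}"

def delete_payload() -> dict:
    """JSON body understood by Solr's update handler for delete-by-query."""
    return {"delete": {"query": AGE_QUERY}}
```

POSTing `delete_payload()` to `delete_old_docs_url("collection1_hot")` would remove the aged documents from the hot collection once they are safely in the cold one.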
Thanks,
Madhava
Sent from my iPhone
You need more shards. And, I’m pretty certain, more hardware.
You say you have 13 billion documents and 6 shards. Solr/Lucene has an absolute
upper limit of 2B (2^31) docs per shard. I don’t quite know how you’re running
at all unless that 13B is a round number. If you keep adding documents, you
will exceed this limit.
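The arithmetic behind Erick's point can be checked directly. Lucene's hard cap is 2^31 - 1 documents per shard; 13 billion documents on 6 shards already averages above that. The 50% headroom target below is an illustrative assumption, not a figure from the thread:

```python
import math

LUCENE_MAX_DOCS_PER_SHARD = 2**31 - 1    # Lucene's hard per-shard limit (~2.147B)
total_docs = 13_000_000_000              # "almost 13 billion" from the thread
current_shards = 6

docs_per_shard = total_docs / current_shards   # ~2.17B: already over the hard limit
min_shards = math.ceil(total_docs / LUCENE_MAX_DOCS_PER_SHARD)   # bare minimum: 7

# With headroom (assumed target: at most 50% of the hard limit per shard):
target = LUCENE_MAX_DOCS_PER_SHARD // 2
comfortable_shards = math.ceil(total_docs / target)
```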
Hi Eric,
There are 6 VMs in total in the Solr cluster, with 2 Solr nodes running on each
VM. There are 6 shards with 3 replicas each. I can see the index size is more
than 220GB on each node for the collection where we are facing the performance
issue.
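Working out the layout from those numbers (a sketch, using only the figures stated in this thread):

```python
vms = 6
nodes_per_vm = 2
shards = 6
replication_factor = 3

total_cores = shards * replication_factor       # 18 replica cores in the collection
total_nodes = vms * nodes_per_vm                # 12 Solr nodes
cores_per_node = total_cores / total_nodes      # 1.5 cores per node on average

index_per_node_gb = 220                             # ">220GB on each node"
index_per_vm_gb = index_per_node_gb * nodes_per_vm  # at least ~440GB of index per VM
```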
The more documents we add to the collection
Oops, I transposed that. If your index is a terabyte and your RAM is 128G,
_that’s_ a red flag.
> On Jul 3, 2020, at 5:53 PM, Erick Erickson wrote:
>
You haven’t said how many _shards_ are present. Nor how many replicas of the
collection you’re hosting per physical machine. Nor how large the indexes are
on disk. Those are the numbers that count. The latter is somewhat fuzzy, but if
your aggregate index size on a machine with, say, 128G of memory
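Plugging this thread's figures into that heuristic (the 128G RAM figure is Erick's example, not a measured value from this cluster):

```python
# Compare aggregate on-disk index size per machine against available RAM.
index_per_node_gb = 220          # ">220GB on each node" (from the thread)
nodes_per_vm = 2                 # two Solr nodes per VM (from the thread)
ram_gb = 128                     # example figure from Erick's message, assumed

index_per_vm_gb = index_per_node_gb * nodes_per_vm   # ~440GB of index per VM
ratio = index_per_vm_gb / ram_gb                     # index is several times RAM
```

A ratio well above 1 means the OS page cache cannot hold the hot parts of the index, which is exactly the red-flag scenario described above.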
Hi Eric,
The collection has almost 13 billion documents, each around 5KB in size, and
all of the roughly 150 fields are indexed. Do you think the number of documents
in the collection is causing this issue? Appreciate your response.
Regards,
Madhava
Sent from my iPhone
> On 3 Jul 2020, a
If you’re seeing low CPU utilization at the same time, you probably
just have too much data on too little hardware. Check your
swapping: how much of your I/O is happening just because Lucene can’t
hold all the parts of the index it needs in memory at once? Lucene
uses MMapDirectory to hold the index and you
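On Linux, the page cache that MMapDirectory depends on, and any swap activity, can be read from `/proc/meminfo`. A minimal sketch, parsing a captured sample (the numbers below are made up for illustration) so it runs anywhere:

```python
def parse_meminfo(text: str) -> dict:
    """Return /proc/meminfo fields as {name: value_in_kB}."""
    out = {}
    for line in text.strip().splitlines():
        key, rest = line.split(":", 1)
        out[key.strip()] = int(rest.strip().split()[0])
    return out

# Hypothetical sample; on a real node, read open("/proc/meminfo").read() instead.
SAMPLE = """\
MemTotal:       131072000 kB
MemFree:          2048000 kB
Cached:          90000000 kB
SwapTotal:        8388608 kB
SwapFree:         8388608 kB
"""

info = parse_meminfo(SAMPLE)
cached_gb = info["Cached"] / 1024 / 1024          # page cache available to mmap
swap_used_kb = info["SwapTotal"] - info["SwapFree"]  # > 0 means the box is swapping
```

If `Cached` is small relative to the on-disk index, Lucene will page constantly; nonzero swap usage on a Solr node is itself a warning sign.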
On Thu, 2020-07-02 at 11:16 +, Kommu, Vinodh K. wrote:
> We are performing QA performance testing on a couple of collections
> which hold 2 billion and 3.5 billion docs respectively.
How many shards?
> 1. Our performance team noticed that read operations are considerably
> more frequent than write operations
Does anyone have any thoughts or suggestions on this issue?
Thanks & Regards,
Vinodh
From: Kommu, Vinodh K.
Sent: Thursday, July 2, 2020 4:46 PM
To: solr-user@lucene.apache.org
Subject: Time-out errors while indexing (Solr 7.7.1)
Hi,
We are performing QA performance testing on a couple of collections which hold
2 billion and 3.5 billion docs respectively.