On Thu, 2014-09-25 at 06:29 +0200, Norgorn wrote:
> I can't say for sure, because the filter caches are off the JVM heap (that's
> HS), but top shows 5 GB cached and no free RAM.
The cached value reported by top should be correct, no matter whether one uses
off-heap caches or not: you have 5GB of disk cache for (I guess) a ~300GB index
per node, so only a small fraction of the index fits in memory and most
searches will be I/O-bound.
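To verify on a node, compare the size of the OS page cache with the size of
the index files; a quick check (the index path below is just an example):

  free -g                        # the 'cached' column is the OS page cache
  du -sh /var/solr/data/*/index  # size of the index files on this node
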
On 9/24/2014 2:18 AM, Toke Eskildsen wrote:
> Norgorn [lsunnyd...@mail.ru] wrote:
>> I have a SolrCloud with 3 nodes and 16 GB RAM on each.
>> My index is about 1 TB and search speed is awfully bad.
>
> We all have different standards with regard to search performance. What is
> "awfully bad" and what is "good enough" for you?

Thanks again.
I answered before properly reading your post, my apologies.
I can't say for sure, because the filter caches are off the JVM heap (that's
HS), but top shows 5 GB cached and no free RAM.
The only question for me now is how to balance the disk cache and the filter
cache. Do I need to worry about that,
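(For reference: in stock Solr the filter cache is sized in solrconfig.xml;
with the HS off-heap cache the specifics may differ. A minimal sketch, with
placeholder sizes rather than recommendations:

  <query>
    <!-- Each cached entry for a dense filter is roughly a bitset of
         maxDoc/8 bytes: at ~330M docs per core that is ~40MB per entry,
         so keep the entry count small to leave RAM for the disk cache. -->
    <filterCache class="solr.FastLRUCache"
                 size="64"
                 initialSize="64"
                 autowarmCount="8"/>
  </query>

Whatever RAM is not claimed by the JVM or the filter cache is left to the OS
for caching index files, so capping the filter cache is the main knob here.)
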
Norgorn [lsunnyd...@mail.ru] wrote:
> The collection contains about a billion documents.
So 300-400M documents per core. That is a challenge with frequent updates and
facets, but with your simple queries it should be doable.
> In the end, I want to reach several seconds per search query (for a
> non-cached query =) )

Thanks for your reply.
The collection contains about a billion documents.
I mostly use simple queries with date and other filters (5 filters per
query).
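(For illustration, a query of that shape might look like the sketch below; the
field names are made up, only the structure matters:

  q=*:*
  &fq=date:[2014-01-01T00:00:00Z TO *]
  &fq=region:eu
  &fq=source:web
  &fq=lang:en
  &fq={!cache=false}session_id:12345

Filters that are reused across queries benefit from the filter cache; one-off
filters can be marked {!cache=false} so they do not evict the useful entries.)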
Yup, disks are cheapest and simplest.
In the end, I want to reach several seconds per search query (for a non-cached
query =) ), so, please,

Norgorn [lsunnyd...@mail.ru] wrote:
> I have a SolrCloud with 3 nodes and 16 GB RAM on each.
> My index is about 1 TB and search speed is awfully bad.
We all have different standards with regard to search performance. What is
"awfully bad" and what is "good enough" for you?
Related to this: How many documents are in your collection?