On Thu, 2014-09-25 at 06:29 +0200, Norgorn wrote:
> I can't say for sure, because the filter caches are outside the JVM heap (that's Heliosearch), but
> top shows 5 GB cached and no free RAM.
The cached amount reported by top should be correct, no matter whether the caches are
on-heap or off-heap: you have 5GB of disk cache for (I guess) roughly 300GB of index
per node, so only a small fraction of the index can be served from memory.
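A back-of-the-envelope sketch of what that ratio means (assuming the 1 TB is split
roughly evenly over the 3 nodes; the figures are the ones from this thread):

  public class CacheCoverage {
      public static void main(String[] args) {
          // Thread figures: ~1 TB of index over 3 nodes, top reports ~5 GB cached.
          double indexPerNodeGB = 1024.0 / 3;  // ~341 GB of index files per node (assumed even split)
          double pageCacheGB = 5.0;            // "top shows 5 GB cached"
          double coveragePct = pageCacheGB / indexPerNodeGB * 100;
          System.out.printf("Disk cache covers ~%.1f%% of the index on each node%n", coveragePct);
          // ~1.5% coverage: nearly every term lookup and postings read goes to disk,
          // which would explain very slow searches.
      }
  }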
On 9/24/2014 2:18 AM, Toke Eskildsen wrote:
> Norgorn [lsunnyd...@mail.ru] wrote:
>> I have a SolrCloud cluster with 3 nodes and 16 GB RAM on each.
>> My index is about 1 TB and search speed is awfully bad.
>
> We all have different standards with regards to search performance. What is
> "awfully bad" and what is "good enough" for you?
[...] about that, or is a big disk cache enough?
And does "optimized index" mean the SOLR "optimize" command, or something else?
Anyway, your previous answers have been really great, so don't spend time on this
if you don't have much to spare :)
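For reference, "optimize" in Solr normally does mean the explicit optimize (forced
merge) request against /update. A minimal SolrJ sketch, assuming a recent SolrJ and
a placeholder URL and collection name:

  import org.apache.solr.client.solrj.SolrClient;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;

  public class OptimizeExample {
      public static void main(String[] args) throws Exception {
          // URL and collection name are placeholders, not from this thread.
          try (SolrClient solr = new HttpSolrClient.Builder(
                  "http://localhost:8983/solr/mycollection").build()) {
              // waitFlush=true, waitSearcher=true, maxSegments=1:
              // merge the index down to a single segment and block until it is done.
              solr.optimize(true, true, 1);
          }
      }
  }

On shards of hundreds of GB this rewrites essentially the whole index, so it is worth
scheduling outside peak hours.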
Norgorn [lsunnyd...@mail.ru] wrote:
> The collection contains about a billion documents.
So 300-400M documents per core. That is a challenge with frequent updates and
facets, but with your simple queries it should be doable.
> In the end, I want to reach several seconds per search query (for non-cached queries).
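A small SolrJ sketch (URL, collection, field and term are placeholders) for putting a
number on that target: it compares Solr's reported QTime with the wall-clock time of a
query:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.HttpSolrClient;
  import org.apache.solr.client.solrj.response.QueryResponse;

  public class QueryTiming {
      public static void main(String[] args) throws Exception {
          try (HttpSolrClient solr = new HttpSolrClient.Builder(
                  "http://localhost:8983/solr/mycollection").build()) {
              SolrQuery q = new SolrQuery("body:example");  // placeholder field and term
              q.setRows(10);
              long start = System.nanoTime();
              QueryResponse rsp = solr.query(q);
              long wallMs = (System.nanoTime() - start) / 1_000_000;
              // QTime is the server-side time; the difference is mostly network
              // and response marshalling.
              System.out.println("hits=" + rsp.getResults().getNumFound()
                      + " QTime=" + rsp.getQTime() + "ms wall=" + wallMs + "ms");
          }
      }
  }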
Norgorn [lsunnyd...@mail.ru] wrote:
> I have a SolrCloud cluster with 3 nodes and 16 GB RAM on each.
> My index is about 1 TB and search speed is awfully bad.
We all have different standards with regards to search performance. What is
"awfully bad" and what is "good enough" for you?
Related to this: How many documents are we talking about?
I can try to make the index smaller, and I'll do that, but I need to know
how much RAM is enough and whether there are some magic ways to make things
better.
The SOLR spec version is hs_0.06 (Heliosearch).
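One way to see how far off the RAM is, is to compare the on-disk size of each core's
index directory with what the machine has available for the OS page cache. A rough
sketch, with the path being a placeholder:

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.Paths;
  import java.util.stream.Stream;

  public class IndexSize {
      public static void main(String[] args) throws IOException {
          // Placeholder path: point this at the core's data/index directory.
          Path indexDir = Paths.get("/var/solr/data/collection1_shard1_replica1/data/index");
          try (Stream<Path> files = Files.walk(indexDir)) {
              long bytes = files.filter(Files::isRegularFile)
                                .mapToLong(p -> p.toFile().length())
                                .sum();
              System.out.printf("Index on disk: %.1f GB%n", bytes / 1e9);
          }
          // Compare with `free -g`: whatever part of this does not fit in the OS
          // page cache has to be read from disk again for every uncached query.
      }
  }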