Hi All
We have an index of ~2,000,000,000 documents, and query and facet times are
too slow for us.
Before turning to the shards solution for improving performance, we thought
about using the multicore feature (our goal is to maximize performance on a
single machine).
Most of our queries will be limited by time, so we want to partition the
data by date/time.
We want to partition the data because the index is too big and doesn't fit
into memory (80 GB).
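To make the partitioning idea concrete, here is a rough sketch of how I
imagine creating one core per day through the CoreAdmin HTTP API (the host,
core names, and instanceDir paths below are just placeholders, and I'm
assuming each core's instanceDir already contains its conf/ with
solrconfig.xml and schema.xml):

    import urllib.parse
    import urllib.request

    SOLR = "http://localhost:8983/solr"  # placeholder host/port

    def core_admin(action, **params):
        # Call the CoreAdmin handler, e.g. /solr/admin/cores?action=CREATE&name=...
        query = urllib.parse.urlencode(dict(action=action, wt="json", **params))
        with urllib.request.urlopen(f"{SOLR}/admin/cores?{query}") as resp:
            return resp.read().decode("utf-8")

    # One core per day of the week; each core has its own instanceDir.
    for day in ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]:
        core_admin("CREATE", name=f"docs_{day}", instanceDir=f"/data/solr/docs_{day}")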

1. Is multicore the best way to implement this requirement?
2. I noticed there are some LOAD / UNLOAD actions on a core - should I use
these actions when managing my cores? If so, how can I LOAD a core that I
have unloaded?
For example:
I have 7 partitions / cores, one for each day of the week.
In most cases I will search for documents only in the last day's core.
Once every ~10,000 queries I need documents from all cores.
Question: Do I need to unload all of the old cores and then load them on
demand (when I see I need data from those cores)? (See the sketch after
question 3.)
3. If the answer to the last question is no, how do I ensure that only the
cores I want are loaded into memory?
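
To illustrate questions 2 and 3, here is roughly the query / unload flow I
have in mind, again only a sketch against the standard CoreAdmin and
distributed-search (shards parameter) APIs. The host and core names are
placeholders, and I'm assuming that an unloaded core can be brought back by
issuing CREATE against its existing instanceDir, since I haven't found an
explicit LOAD action in the CoreAdmin documentation:

    import urllib.parse
    import urllib.request

    SOLR_HOST = "localhost:8983"        # placeholder host/port
    SOLR = f"http://{SOLR_HOST}/solr"
    DAYS = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
    ALL_CORES = [f"docs_{d}" for d in DAYS]
    TODAY = "docs_sun"                  # stand-in for "the last day's core"

    def solr_get(path, **params):
        # Simple GET against a Solr handler, returning the raw JSON response.
        query = urllib.parse.urlencode(dict(wt="json", **params))
        with urllib.request.urlopen(f"{SOLR}/{path}?{query}") as resp:
            return resp.read().decode("utf-8")

    # Common case: search only the current day's core.
    solr_get(f"{TODAY}/select", q="some query")

    # Rare case (roughly 1 in 10,000 queries): search across all seven cores
    # by listing them as shards of one distributed query.
    shards = ",".join(f"{SOLR_HOST}/solr/{core}" for core in ALL_CORES)
    solr_get(f"{TODAY}/select", q="some query", shards=shards)

    # The question-2 scenario: unload the old cores, then (my assumption)
    # re-CREATE them against the same instanceDir when they are needed again.
    for core in ALL_CORES:
        if core != TODAY:
            solr_get("admin/cores", action="UNLOAD", core=core)
    # ... later, before an all-cores query:
    for core in ALL_CORES:
        if core != TODAY:
            solr_get("admin/cores", action="CREATE", name=core,
                     instanceDir=f"/data/solr/{core}")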

Thanks
Yuval
