bq: Will docValues help with memory usage?
I'm still a bit fuzzy on all the ramifications of DocValues, but I
somewhat doubt they'll result in index size savings. They _really_
help with loading the values for a field, but the end result is still
the values in memory.
People who know what they'...
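For anyone digging through the archive later: docValues is enabled per field in schema.xml (Solr 4.2+). A minimal sketch, with a purely illustrative field name:

    <field name="facet_category" type="string" indexed="true"
           stored="false" docValues="true"/>
    <!-- note: on 4.x a docValues field may also need required="true" or a default value -->

Whether those values then live on the Java heap or stay on disk depends on the docValues format the codec uses, which is presumably what the memory question comes down to.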
Hi Erick,
Thanks for the tip.
Will docValues help with memory usage? It seemed a bit complicated to set
up...
The index size saving was nice because it means that potentially I could
use smaller provisioned IOPS volumes, which cost less...
Thanks.
On 3 May 2013 18:27, Erick Erickson wrote:
Annette:
Be a little careful with the index size savings; they really don't
mean much for _searching_. The stored field compression
significantly reduces the size on disk, but only for the stored
data, which is only accessed when returning the top N docs. In
terms of how many docs you can fit on you...
On 5/3/2013 3:22 AM, Annette Newton wrote:
> One question Shawn - did you ever get any costings around Zing? Did you
> trial it?
I never did do a trial. I asked them for a cost and they didn't have an
immediate answer; they wanted to do a phone call and get a lot of information
about my setup. The pr...
One question Shawn - did you ever get any costings around Zing? Did you
trial it?
Thanks.
On 3 May 2013 10:03, Annette Newton wrote:
> Thanks Shawn.
>
> I have played around with Soft Commits before and didn't seem to get any
> improvement, but with the current load testing I am doing I will...
Thanks Shawn.
I have played around with Soft Commits before and didn't seem to get any
improvement, but with the current load testing I am doing I will give it
another go.
I have researched docValues and came across the fact that they would increase
the index size. With the upgrade to 4.2.1 the i...
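For reference, soft commits are usually driven by the auto-commit settings in the <updateHandler> section of solrconfig.xml rather than per request; a minimal sketch with purely illustrative intervals:

    <autoCommit>
      <maxTime>60000</maxTime>            <!-- hard commit: flush to disk every 60s -->
      <openSearcher>false</openSearcher>  <!-- don't open a new searcher on hard commit -->
    </autoCommit>
    <autoSoftCommit>
      <maxTime>5000</maxTime>             <!-- soft commit: new docs become searchable every 5s -->
    </autoSoftCommit>

The shorter the soft commit interval, the more often caches are invalidated, so the right numbers depend on how fresh search results need to be.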
On 5/2/2013 4:24 AM, Annette Newton wrote:
> Hi Shawn,
>
> Thanks so much for your response. We basically are very write intensive
> and write throughput is pretty essential to our product. Reads are
> sporadic and are actually functioning really well.
>
> We write on average (at the moment) 8-1...
Hi Shawn,
Thanks so much for your response. We basically are very write intensive
and write throughput is pretty essential to our product. Reads are
sporadic and are actually functioning really well.
We write on average (at the moment) 8-12 batches of 35 documents per
minute. But we really will...
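To make the batching concrete: a batch like that is typically sent as a single update message to /update, for example (field names purely illustrative):

    <add>
      <doc>
        <field name="id">doc-001</field>
        <field name="created_at">2013-05-02T10:15:00Z</field>
      </doc>
      <doc>
        <field name="id">doc-002</field>
        <field name="created_at">2013-05-02T10:15:01Z</field>
      </doc>
      <!-- ... remaining docs in the batch; visibility is then governed by the commit settings ... -->
    </add>

Sending all 35 documents in one request keeps the per-document overhead down compared with 35 separate posts.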
On 5/1/2013 8:42 AM, Annette Newton wrote:
> It was a single delete with a date range query. We have 8 machines, each
> with 35GB memory, of which 10GB is allocated to the JVM. Garbage collection has
> always been a problem for us, with the heap not clearing on full garbage
> collection. I don't know what is bei...
Hi Shawn
Thanks for the reply.
It was a single delete with a date range query. We have 8 machines, each
with 35GB memory, of which 10GB is allocated to the JVM. Garbage collection has
always been a problem for us, with the heap not clearing on full garbage
collection. I don't know what is being held in m...
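For readers following along, a delete like that is normally posted to /update as a delete-by-query; a minimal sketch with an illustrative field name and cut-off date:

    <delete>
      <query>created_at:[* TO 2013-04-01T00:00:00Z]</query>
    </delete>

Each matching document is only marked as deleted on every shard and replica; the disk space comes back later as segments merge, so the index does not shrink immediately.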
On 5/1/2013 3:39 AM, Annette Newton wrote:
> We have a 4 shard / 2 replica SolrCloud setup, each with about 26GB of
> index. A total of 24,000,000 documents. We issued a rather large delete yesterday
> morning to reduce that size by about half; this resulted in the loss of all
> shards while the delete wa...