Ideally we could get good approximations for all of them, including any of
our custom caches (of which we have about five). The RAM size estimator
spreadsheet [1] is helpful but we'd love to get accurate live size metrics.
[1]
https://github.com/apache/lucene-solr/blob/trunk/dev-tools/size-estimator
_which_ SolrCache objects? filterCache? result cache? documentCache?
result cache is about "average size of a query" + (window size *
sizeof(int)) for each entry.
filter cache is about "average size of a filter query" + maxDoc/8 bytes for each entry.
document cache is about "average size of the stored fields in bytes" * the number of entries.
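A back-of-the-envelope sketch of those per-entry figures; the numbers plugged
in below are made-up assumptions, not measurements from anyone's index:

public class CacheSizeEstimate {
    public static void main(String[] args) {
        long maxDoc = 10_000_000L;          // documents in the index (assumed)
        long avgQueryBytes = 200;           // average serialized query size (assumed)
        long avgFilterQueryBytes = 150;     // average filter query size (assumed)
        long avgStoredFieldsBytes = 4_096;  // average stored-field size per doc (assumed)
        int windowSize = 50;                // queryResultWindowSize from solrconfig.xml (assumed)

        long resultCacheEntry = avgQueryBytes + (long) windowSize * Integer.BYTES;
        long filterCacheEntry = avgFilterQueryBytes + maxDoc / 8; // one bit per doc
        long documentCacheEntry = avgStoredFieldsBytes;

        System.out.printf("per-entry estimates: result=%d B, filter=%d B, document=%d B%n",
                resultCacheEntry, filterCacheEntry, documentCacheEntry);

        // Multiply by the configured cache size to get a rough upper bound.
        int filterCacheSize = 512;
        System.out.printf("filterCache upper bound ~ %.1f MB%n",
                filterCacheSize * filterCacheEntry / (1024.0 * 1024.0));
    }
}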
We'd like to graph the approximate RAM size of our SolrCache instances. Our
first attempt at doing this was to use the Lucene RamUsageEstimator [1].
Unfortunately, this appears to give a bogus result. Every instance of
FastLRUCache was judged to have the same exact size, down to the byte. I
assume
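For reference, a minimal sketch of the measurement being described, assuming a
Lucene 4.x-era RamUsageEstimator that still offers the reflective deep walk; the
map here is just a stand-in for a cache's backing structure, not an actual FastLRUCache:

import java.util.concurrent.ConcurrentHashMap;
import org.apache.lucene.util.RamUsageEstimator;

public class CacheRamProbe {
    public static void main(String[] args) {
        // Stand-in for a cache's backing map (illustrative only).
        ConcurrentHashMap<String, int[]> backing = new ConcurrentHashMap<>();
        for (int i = 0; i < 10_000; i++) {
            backing.put("key-" + i, new int[50]);
        }
        long bytes = RamUsageEstimator.sizeOf(backing); // reflective, deep estimate
        System.out.println("estimated size: " + RamUsageEstimator.humanReadableUnits(bytes));
    }
}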
On 3/24/2014 9:48 AM, David Flower wrote:
It's not sawtoothing though, it's sitting solidly at 52%
It may be very difficult to see the sawtooth effect unless you actually
connect an app like jconsole to your running Solr instance and watch the
graphs over time.
My point was that what you've
> I'm looking at the dashboard page on all 4 nodes and seeing
> Physical Memory 92% compared with ~41-44%
>
> And JVM-Memory 52.9% compared to 23-28%
>
> The reason I mentioned slave is that on the core overview page there is
> an entry for Slave (Searching) that doesn't appear on any of the other
> nodes
If you are using SolrCloud, then there are no masters and no slaves.
Each shard has a leader, but that is not a permanent role.
The master a
...with a collection that's sharded into 2 and each shard having a master
and a slave for redundancy, however 1 node has decided to use twice the RAM
that the others are using within the cluster.

The only difference we can spot between the nodes is that the one with the
higher RAM usage is saying it's a slave while all the others are reporting
that they are masters.

Does anyone have any ideas why this has occurred?

Cheers,
David
> > I use Nutch (which uses Hadoop) to send documents from HBase to Solr. I am
> > not indexing documents in Hadoop. I just send documents via Map/Reduce jobs
> > into my SolrCloud. Nutch sends documents like this:
> >
> > ...
> > SolrServer solr = new CommonsHttpSolrServer(solrUrl);
> > ...
> > private final List inputDocs = new
On 9/9/2013 10:35 AM, P Williams wrote:
Is it odd that my index is ~16GB but top shows 30GB in virtual memory?
Would the extra be for the field and filter caches I've increased in size?
This should probably be a new thread, but it might have some
applicability here, so I'm replying.
I have
job could not send documents into SolrCloud and stops sending
> documents into Solr (the Hadoop job fails). When I open my Solr Admin Page I
> see that:
>
> Physical Memory 98.1%
> Swap Space NaN%
> File Descriptor Count 2.5%
> JVM-Memory 1.6%
>
> All in all I think that probl
does not go down). My machine
uses CentOS 6.4. Should I drop caches when the percentage goes up, or what do
you do in such situations?
2013/8/24 Erick Erickson
This is sounding like an XY problem. What are you measuring
when you say RAM usage is 99%? is this virtual memory? See:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
What errors are you seeing when you say: "my node stops to receiving
documents"?
How are you s
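For what it's worth, the JVM's own view of the heap can be printed next to the
OS-level physical-memory figure the dashboard shows; a rough probe, assuming a
HotSpot JVM where com.sun.management.OperatingSystemMXBean is available:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryProbe {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.printf("heap used=%d MB, committed=%d MB, max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

        // HotSpot-specific: physical RAM as the OS sees it. This number climbs toward
        // 100% as the OS fills free RAM with page cache (e.g. memory-mapped index
        // files), even while the heap itself stays small.
        com.sun.management.OperatingSystemMXBean os =
                (com.sun.management.OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.printf("physical total=%d MB, free=%d MB%n",
                os.getTotalPhysicalMemorySize() >> 20, os.getFreePhysicalMemorySize() >> 20);
    }
}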
I ran a test on my SolrCloud. I tried to send 100 million documents into my
node, which has no replica, via Hadoop. When the document count sent to that node
is around 30 million, the RAM usage of my machine becomes 99% (Solr Heap Usage
is not 99%; it uses just 3GB - 4GB of RAM). After a while my node
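Not from the thread itself, but a minimal SolrJ 4.x-style sketch of batched
indexing against SolrCloud, using CloudSolrServer rather than the
CommonsHttpSolrServer shown in the Nutch snippet above; the ZooKeeper address,
collection name and field names are assumptions:

import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BulkSender {
    public static void main(String[] args) throws Exception {
        CloudSolrServer solr = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181"); // assumed zkHost
        solr.setDefaultCollection("collection1");                                 // assumed collection

        List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
        for (int i = 0; i < 100_000; i++) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-" + i);
            doc.addField("content", "example body " + i);
            batch.add(doc);
            if (batch.size() == 1000) {  // send in modest batches rather than one huge request
                solr.add(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty()) solr.add(batch);
        solr.commit();
        solr.shutdown();
    }
}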
On 7/29/2013 1:12 AM, Furkan KAMACI wrote:
> When I look at my dashboard I see that 27.30 GB available for JVM, 24.77
> GB is gray and 16.50 GB is black. I don't do anything on my machine right
> now. Did it cache documents or is there any problem, how can I learn it?
This is simple information
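If it helps, the three numbers on the JVM-Memory bar appear to line up with what
the JVM itself reports; a quick check, assuming they correspond to max, allocated
and used heap respectively:

public class HeapNumbers {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();               // presumably the "available for JVM" figure
        long allocated = rt.totalMemory();       // presumably the gray portion of the bar
        long used = allocated - rt.freeMemory(); // presumably the dark portion of the bar
        System.out.printf("max=%.2f GB, allocated=%.2f GB, used=%.2f GB%n",
                gb(max), gb(allocated), gb(used));
    }

    private static double gb(long bytes) {
        return bytes / (1024.0 * 1024.0 * 1024.0);
    }
}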
On 12/13/2010 9:46 PM, Cameron Hurst wrote:
When I start the server I am using about 90MB of RAM, which is fine, and
from the Google searches I found that is normal. The issue comes when I
start indexing data. In my solrconfig.xml file my maximum RAM buffer
is 32MB. In my mind that means that th
Several observations:
1> If by RAM buffer size you're referring to the value in solrconfig.xml,
that is a limit on the size of the internal buffer used while indexing. When
that limit is reached,
the data is flushed to disk (see the sketch below). It is irrelevant to searching.
2> When you run searches, various inter
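A minimal sketch of the knob that setting controls at the Lucene level, on the
assumption that Solr's <ramBufferSizeMB> maps onto IndexWriterConfig's RAM
buffer; written against a recent Lucene (5+) API, with an illustrative index path:

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class RamBufferSketch {
    public static void main(String[] args) throws Exception {
        Directory dir = FSDirectory.open(Paths.get("/tmp/ram-buffer-demo")); // illustrative path
        IndexWriterConfig cfg = new IndexWriterConfig(new StandardAnalyzer());
        cfg.setRAMBufferSizeMB(32.0); // same role as <ramBufferSizeMB>32</ramBufferSizeMB> in solrconfig.xml
        try (IndexWriter writer = new IndexWriter(dir, cfg)) {
            Document doc = new Document();
            doc.add(new TextField("body",
                    "buffered in RAM until roughly 32MB, then flushed to a segment", Field.Store.NO));
            writer.addDocument(doc);
        }
    }
}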
hello all,
I am a new user to Solr and I am having a few issues with the setup and
wondering if anyone had some suggestions. I am currently running this as
just a test environment before I go into production. I am using a
tomcat6 environment for my servlet and solr 1.4.1 as the solr build. I
set u
and 2GB allocated to eden space.
I have caching, autocommit and auto-warming commented out of
solrconfig.xml.
After I index 500k docs and call commit/optimize (via URL after indexing
has completed) my RAM usage is only about 1.5GB, but then if I stop
and restart my Solr server over the same data the RAM immediately
jumps to about 4GB and I can't understand why there is a difference.
of times, along with
suggestions for tracking it down, whether it's just
postponed GCing, etc.
HTH
Erick
On Mon, Jan 25, 2010 at 10:47 AM, Antonio Lobato wrote:
Hello everyone!
I have a question about indexing a large dataset in Solr and ram
usage. I am currently indexing about 160 gigabytes of data to a
dedicated indexing server. The data is constantly being fed to Solr,
24/7. The index grows as I prune away old data that is not needed, so
the index size stays in the
http://issues.apache.org/jira/browse/SOLR-392
Summary:
It would be good for end-user applications if Solr allowed searches to
cease before finishing, and still return partial results.
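Not part of the issue itself, but Solr's existing timeAllowed parameter already
gives a flavor of early termination with partial results; a small SolrJ sketch,
where the server URL and query are illustrative:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TimeLimitedSearch {
    public static void main(String[] args) throws Exception {
        HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1"); // assumed URL
        SolrQuery q = new SolrQuery("*:*");
        q.setTimeAllowed(100); // stop collecting after ~100 ms and return whatever was found
        QueryResponse rsp = solr.query(q);
        // If the limit was hit, the responseHeader carries partialResults=true.
        System.out.println("numFound (possibly partial): " + rsp.getResults().getNumFound());
        solr.shutdown();
    }
}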