On 8/15/2019 8:14 AM, Kojo wrote:
I am starting to think that my setup has more than one problem.
As I said before, I am not balancing my load across the Solr nodes, and I have
eight nodes. All of my web application requests go to one Solr node, the
only one that dies. If I distribute the load across the
Ere,
thanks for the advice. I don't have this specific use case, but I am doing
some operations that I think could be risky, since it is the first time I am
using them.
There is a page that groups by one specific attribute of documents
distributed across shards. I am using compositeId to allow grouping to work
correctly
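As a reference point, here is a minimal sketch of what compositeId routing can look like when indexing from Python. The node URL, collection name, and field names are placeholders, not anything from this thread; the `group!docid` form is the standard compositeId convention, and it keeps all documents with the same prefix on one shard so grouping behaves as expected.

import requests

SOLR_URL = "http://localhost:8983/solr/my_collection/update"  # placeholder node/collection

docs = [
    {"id": "projectA!doc-1", "group_s": "projectA", "title_t": "first doc"},
    {"id": "projectA!doc-2", "group_s": "projectA", "title_t": "second doc"},
    {"id": "projectB!doc-3", "group_s": "projectB", "title_t": "third doc"},
]

# With the compositeId router, the prefix before "!" is hashed to pick the
# shard, so documents that share a prefix are co-located and can be grouped
# without cross-shard grouping surprises.
resp = requests.post(SOLR_URL, json=docs, params={"commit": "true"})
resp.raise_for_status()
print(resp.json())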
Does your web application, by any chance, allow deep paging or something
like that which requires returning rows at the end of a large result
set? Something like a query where you could have parameters like
&rows=10&start=100? That can easily cause OOM with Solr when using
a sharded index. It
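For illustration, one standard Solr alternative to large `start` offsets is cursorMark paging, which avoids the memory blowup on sharded indexes. A rough Python sketch, assuming a placeholder URL and collection, and assuming the uniqueKey field is `id` (cursorMark requires a sort that ends on the uniqueKey):

import requests

SELECT_URL = "http://localhost:8983/solr/my_collection/select"  # placeholder

params = {
    "q": "*:*",
    "rows": 100,
    "sort": "id asc",   # cursorMark requires the uniqueKey as a tiebreaker
    "cursorMark": "*",  # start of the cursor
    "wt": "json",
}

while True:
    data = requests.get(SELECT_URL, params=params).json()
    for doc in data["response"]["docs"]:
        pass  # process each document here
    next_cursor = data["nextCursorMark"]
    if next_cursor == params["cursorMark"]:
        break  # cursor did not advance: no more results
    params["cursorMark"] = next_cursor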
Erick,
I am using Python, so I think SolrJ is not an option. I wrote my libs to
connect to Solr and interpret Solr data.
I will try to load balance via the Apache server that is in front of Solr before I
change my setup; I think that will be simpler. I was not aware of the
single point of failure on Solr Cloud
OK, if you’re sending HTTP requests to a single node, that’s
something of an anti-pattern unless it’s a load balancer that
sends requests to random nodes in the cluster. Do note that
even if you do send all HTTP requests to one node, the top-level
request will be forwarded to other nodes in the cluster
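Since the web application talks to Solr over plain HTTP from Python, one low-effort way to stop hammering a single node (until a real load balancer is in place) is to rotate requests across the nodes on the client side. A rough sketch, with hypothetical node URLs standing in for the real eight nodes:

import itertools
import requests

# Hypothetical list of the eight SolrCloud nodes; any of them can accept a
# top-level request and fan it out to the relevant shards.
NODES = [f"http://solr{i}.example.com:8983/solr" for i in range(1, 9)]
_node_cycle = itertools.cycle(NODES)

def solr_select(collection, **params):
    """Send a select request to the next node in round-robin order."""
    base = next(_node_cycle)
    resp = requests.get(f"{base}/{collection}/select", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Example: results = solr_select("my_collection", q="*:*", rows=10)

An HTTP load balancer (for example the Apache server already sitting in front of Solr) achieves the same effect without touching the client code.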
Erick,
I am starting to think that my setup has more than one problem.
As I said before, I am not balancing my load to Solr nodes, and I have
eight nodes. All of my web application requests go to one Solr node, the
only one that dies. If I distribute the load across the other nodes, is it
possible
Kojo:
On the surface, this is a reasonable configuration. Note that you may still
want to decrease the Java heap, but only if you have enough “head room” for
memory spikes.
How do you know if you have “head room”? Unfortunately the only good answer is
“you have to test”. You can look at the GC
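One rough way to get a feel for head room is to see how much heap is still live after full collections in the GC log. Purely as an illustration, assuming Java 9+ unified GC logging where full collections appear on lines containing "Pause Full" with a heap transition like "5011M->3820M(6144M)" (older JVMs and other collectors format the log differently), a quick scan could look like:

import re
import sys

# Very rough scan of a unified-logging GC file: print the heap occupancy
# reported on "Pause Full" lines.
pattern = re.compile(r"Pause Full.*?(\d+)M->(\d+)M\((\d+)M\)")

with open(sys.argv[1]) as gc_log:
    for line in gc_log:
        m = pattern.search(line)
        if m:
            before, after, total = map(int, m.groups())
            print(f"full GC: {before}M -> {after}M of {total}M heap")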
Shawn,
Only my web application accesses this Solr. At a first look at the HTTP server
logs I didn't find anything unusual. Sometimes a very big crawler hits my
servers; that was my first guess.
No scheduled cron jobs were running at that time either.
I think that I will reconfigure my boxes with two
On 8/13/2019 9:28 AM, Kojo wrote:
Here are the last two gc logs:
https://send.firefox.com/download/6cc902670aa6f7dd/#Ee568G9vUtyK5zr-nAJoMQ
Thank you for that.
Analyzing the 20MB GC log, the system actually looks pretty healthy.
That log covers 58 hours of runtime, and everything looks very healthy.
Shawn,
Here are the last two gc logs:
https://send.firefox.com/download/6cc902670aa6f7dd/#Ee568G9vUtyK5zr-nAJoMQ
Thank you,
Koji
On Tue, Aug 13, 2019 at 09:33, Shawn Heisey wrote:
> On 8/13/2019 6:19 AM, Kojo wrote:
> > --
> > tail -f node1/logs/solr_oom_killer-8983-2019
On 8/13/2019 6:19 AM, Kojo wrote:
--
tail -f node1/logs/solr_oom_killer-8983-2019-08-11_22_57_56.log
Running OOM killer script for process 38788 for Solr on port 8983
Killed process 38788
--
Based on what I can see, a 6GB heap is not big enough for the setup
you've got
Erick and Shawn,
thank you very much for the very useful information.
When I started to move from single Solr to cloud, I was planning to use the
cluster for very large collections.
But the collection that I mentioned will not grow that much, so I will downsize
the shards.
Thanks for the information about
On 8/12/2019 5:47 AM, Kojo wrote:
I am using Solr cloud on this configuration:
2 boxes (one Solr in each box)
4 instances per box
Why are you running multiple instances on one server? For most setups,
this has too much overhead. A single instance can handle many indexes.
The only good reason
Kojo:
The solr logs should give you a much better idea of what the triggering event
was.
Just increasing the heap doesn’t guarantee much; again, the Solr logs will
report the OOM exception if it’s memory-related. You haven’t told us what your
physical RAM is nor how much you’re allocating to heap.
1) Depends on your document routing strategy. It sounds like you could
be using the compositeId strategy, and if so, there's still a hash
range assigned to each shard, so you can split the big shards into
smaller shards (see the sketch after this message).
2) Since you're replicating in 2 places, when one of your servers
crashes, there
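To illustrate point 1), here is a hedged sketch of splitting an existing shard with the Collections API SPLITSHARD action from Python. The node URL, collection name, shard name, and async request ID are placeholders; on a real cluster the async request should be checked afterwards with REQUESTSTATUS.

import requests

SOLR_ADMIN = "http://localhost:8983/solr/admin/collections"  # placeholder node

# Ask SolrCloud to split one shard of the collection into two sub-shards.
# With the compositeId router, each sub-shard receives half of the parent's
# hash range.
resp = requests.get(SOLR_ADMIN, params={
    "action": "SPLITSHARD",
    "collection": "my_collection",   # placeholder collection name
    "shard": "shard1",               # the shard to split
    "async": "split-shard1-req",     # track completion via REQUESTSTATUS
})
resp.raise_for_status()
print(resp.json())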
> Some of our clients have been using it. It still has a few problems as you
> can see in jira, but nothing major.
>
Same here.
> Otis
> --
> Performance Monitoring - http://sematext.com/spm
> On Oct 22, 2012 9:18 PM, "Mark" wrote:
>
> > I have a few questions regarding Solr Cloud. I've been following
Some of our clients have been using it. It still has a few problems as you
can see in jira, but nothing major.
Otis
--
Performance Monitoring - http://sematext.com/spm
On Oct 22, 2012 9:18 PM, "Mark" wrote:
> I have a few questions regarding Solr Cloud. I've been following it for
> quite some time