Ere,
thanks for the advice. I don't have this specific use case, but I am doing
some operations that I think could be risky, since this is the first time I am
using them.
There is a page that groups by one specific attribute of documents
distributed across shards. I am using composite ID routing to allow grouping
cor
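(For illustration, a minimal indexing sketch in Python, not from the thread; the collection name "mycollection" and field "institution" are placeholders. With the default compositeId router, documents whose IDs share the prefix before "!" hash to the same shard, which is what makes cross-shard grouping on that attribute safe.)

import requests

# Docs whose IDs share the "usp!" prefix land on the same shard under the
# default compositeId router; collection and field names are hypothetical.
docs = [
    {"id": "usp!doc-1", "institution": "usp", "title": "First paper"},
    {"id": "usp!doc-2", "institution": "usp", "title": "Second paper"},
]

resp = requests.post(
    "http://localhost:8983/solr/mycollection/update?commit=true",
    json=docs,
)
resp.raise_for_status()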
Does your web application, by any chance, allow deep paging or something
like that which requires returning rows at the end of a large result
set? Something like a query where you could have parameters like
&rows=10&start=100000 ? That can easily cause OOM with Solr when using
a sharded index. It
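(The usual way to page through a large result set without that memory cost is cursorMark. A sketch in Python, assuming "id" is the uniqueKey and "mycollection" is a placeholder name:)

import requests

params = {
    "q": "*:*",
    "rows": 100,
    "sort": "id asc",    # cursorMark requires a sort on the uniqueKey
    "cursorMark": "*",
    "wt": "json",
}
url = "http://localhost:8983/solr/mycollection/select"

while True:
    data = requests.get(url, params=params).json()
    for doc in data["response"]["docs"]:
        handle(doc)      # hypothetical per-document handler
    next_cursor = data["nextCursorMark"]
    if next_cursor == params["cursorMark"]:
        break            # unchanged cursor means the results are exhausted
    params["cursorMark"] = next_cursor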
Erick,
I am using Python, so I think SolrJ is not an option. I wrote my own libs to
connect to Solr and interpret Solr data.
I will try to load balance via the Apache server that is in front of Solr before
I change my setup; I think it will be simpler. I was not aware of the
single point of failure on Solr C
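(A minimal mod_proxy_balancer sketch for that kind of Apache setup; hostnames and ports are placeholders, and mod_proxy, mod_proxy_http and mod_proxy_balancer must be loaded:)

<Proxy "balancer://solrcluster">
    BalancerMember "http://solr-box1:8983"
    BalancerMember "http://solr-box1:8984"
    # ...one BalancerMember per Solr node
    ProxySet lbmethod=byrequests
</Proxy>

ProxyPass        "/solr" "balancer://solrcluster/solr"
ProxyPassReverse "/solr" "balancer://solrcluster/solr"

(Round-robin at the HTTP layer only spreads the top-level requests; SolrCloud still fans each query out to the shards internally.)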
OK, if you’re sending HTTP requests to a single node, that’s
something of an anti-pattern unless it’s a load balancer that
sends requests to random nodes in the cluster. Do note that
even if you do send all HTTP requests to one node, the top-level
request will be forwarded to other nodes in the clus
Erick,
I am starting to think that my setup has more than one problem.
As I said before, I am not balancing my load to Solr nodes, and I have
eight nodes. All of my web application requests go to one Solr node, the
only one that dies. If I distribute the load across the other nodes, is it
possible
Kojo:
On the surface, this is a reasonable configuration. Note that you may still
want to decrease the Java heap, but only if you have enough “head room” for
memory spikes.
How do you know if you have “head room”? Unfortunately the only good answer is
“you have to test”. You can look at the GC
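(For reference, the heap is usually set in bin/solr.in.sh; the value below is an example to test with, not a recommendation:)

# bin/solr.in.sh (excerpt); example value only
SOLR_HEAP="6g"
# Recent Solr versions write GC logs next to the Solr logs
# (e.g. server/logs/solr_gc.log*); analyze them after a realistic
# load test to judge how much headroom the heap actually has.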
Shawn,
Only my web application accesses this Solr. At a first look at the HTTP server
logs I didn't find anything unusual. Sometimes a very big crawler hits my
servers; that was my first bet.
No scheduled cron jobs were running at this time either.
I think that I will reconfigure my boxes with two
On 8/13/2019 9:28 AM, Kojo wrote:
Here are the last two gc logs:
https://send.firefox.com/download/6cc902670aa6f7dd/#Ee568G9vUtyK5zr-nAJoMQ
Thank you for that.
Analyzing the 20MB gc log, the system actually looks pretty healthy.
That log covers 58 hours of runtime, and everything looks ver
Shawn,
Here are the last two gc logs:
https://send.firefox.com/download/6cc902670aa6f7dd/#Ee568G9vUtyK5zr-nAJoMQ
Thank you,
Koji
On Tue, Aug 13, 2019 at 09:33, Shawn Heisey wrote:
On 8/13/2019 6:19 AM, Kojo wrote:
--
tail -f node1/logs/solr_oom_killer-8983-2019-08-11_22_57_56.log
Running OOM killer script for process 38788 for Solr on port 8983
Killed process 38788
--
Based on what I can see, a 6GB heap is not big enough for the setup
you've got
Erick and Shawn,
thank you very much for the very useful information.
When I started to move from single Solr to cloud, I was planning to use the
cluster for very large collections.
But the collection I mentioned will not grow that much, so I will downsize the
shards.
Thanks for the information abou
On 8/12/2019 5:47 AM, Kojo wrote:
I am using Solr cloud on this configuration:
2 boxes (one Solr in each box)
4 instances per box
Why are you running multiple instances on one server? For most setups,
this has too much overhead. A single instance can handle many indexes.
The only good reas
Kojo:
The solr logs should give you a much better idea of what the triggering event
was.
Just increasing the heap doesn’t guarantee much, again the Solr logs will
report the OOM exception if it’s memory-related. You haven’t told us what your
physical RAM is nor how much you’re allocating to he
Hi,
I am using Solr cloud on this configuration:
2 boxes (one Solr in each box)
4 instances per box
At this moment I have an active collection with about 300,000 docs. The
other collections are not being queried. The active collection is
configured:
- shards: 16
- replication factor: 2
These t
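(For illustration, not from the thread: a collection with that layout could be created through the Collections API along these lines. The name and host are placeholders, and maxShardsPerNode matters once shards x replicas exceeds the node count:)

import requests

resp = requests.get(
    "http://localhost:8983/solr/admin/collections",
    params={
        "action": "CREATE",
        "name": "mycollection",    # placeholder
        "numShards": 16,
        "replicationFactor": 2,
        "maxShardsPerNode": 4,     # 16 shards x 2 replicas over 8 nodes
    },
)
resp.raise_for_status()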
> into the new server.
>
> 3) Are there any monitoring tools for monitoring Solr Cloud? I have looked
> at SPM by Sematext and New Relic, but am having issues with both tools.
>
> Thanks,
> Aditya
issues with both the tools.
Thanks,
Aditya
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Cloud-Questions-tp4081441.html
Sent from the Solr - User mailing list archive at Nabble.com.
> Some of our clients have been using it. It still has a few problems as you
> can see in jira, but nothing major.
Same here.
Some of our clients have been using it. It still has a few problems as you
can see in jira, but nothing major.
Otis
--
Performance Monitoring - http://sematext.com/spm
On Oct 22, 2012 9:18 PM, "Mark" wrote:
> I have a few questions regarding Solr Cloud. I've been following it for
> quite some ti
I have a few questions regarding Solr Cloud. I've been following it for quite
some time but I believe it wasn't ever production ready. I see that with the
release of 4.0 it's considered stable… is that the case? Can anyone out there
share your experiences with Solr Cloud in a production environm
Great! Thank you. I'm eager to test it on EC2 whenever it's near beta-ready.
On 10/13/2011 11:51 AM, Ted Dunning wrote:
Hi,
I have some questions about the 4.0 solr cloud implementation.
1. I want to have a large cloud of machines on a network. Each machine will
process data and write to its "local" solr server (node, shard or whatever).
This is necessary because it won't be possible to have 100 machines with 100
On 9/30/2011 12:26 PM, Pulkit Singhal wrote:
> SOLR-2355 is definitely a step in the right direction but something I
> would like to get clarified:
Questions about SOLR-2355 are best asked in SOLR-2355 :)
> b) Does this basic implementation distribute across shards or across
> cores?
From a bri
Thanks Pulkit!
I'd actually been meaning to add the post.jar commands needed to index a doc to
each shard to the wiki. Waiting till I streamline a few things though.
- Mark
On Sep 30, 2011, at 12:35 PM, Pulkit Singhal wrote:
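(Until then, a sketch of the kind of commands meant: post.jar from example/exampledocs takes the target core's update URL via -Durl. Ports follow the wiki's two-shard example; the file names are placeholders.)

java -Durl=http://localhost:8983/solr/update -jar post.jar mydoc1.xml
java -Durl=http://localhost:7574/solr/update -jar post.jar mydoc2.xml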
BTW I updated the wiki with the following; hope it keeps it simple for
others starting out:
Example B: Simple two shard cluster with shard replicas
Note: This setup leverages copy/paste to set up 2 cores per shard, and
distributed searches validate successful completion of this
example/exercise. But
SOLR-2355 is definitely a step in the right direction but something I
would like to get clarified:
a) There were some fixes to it that went on the 3.4 & 3.5 branch based
on the comments section ... are they not available or not needed on
4.x trunk?
b) Does this basic implementation distribute acr
2011/9/29 Yury Kats :
> True, but there is a big gap between goals and current state.
> Right now, there is distributed search, but not distributed indexing
> or auto-sharding, or auto-replication. So if you want to use the SolrCloud
now (as many of us do), you need to do a number of things yourself
Agree. Thanks also for clarifying. It helps.
On 9/29/2011 7:22 AM, Darren Govoni wrote:
> That was kinda my point. The "new" cloud implementation
> is not about replication, nor should it be. But rather about
> horizontal scalability where "nodes" manage different parts
> of a unified index.
It's about many things. You stated one, but there
That was kinda my point. The "new" cloud implementation
is not about replication, nor should it be. But rather about
horizontal scalability where "nodes" manage different parts
of a unified index. One of the design goals of the "new" cloud
implementation is for this to happen more or less automati
@Darren: I feel that the question itself is misleading. Creating
shards is meant to separate out the data ... not keep the exact same
copy of it.
I think the two-node setup that was attempted by Sam misled him and
us into thinking that configuring two nodes which are to be named
"shard1" ... some
On 09/27/2011 05:05 PM, Yury Kats wrote:
You need to either submit the docs to both nodes, or have a replication
setup between the two. Otherwise they are not in sync.
I hope that's not the case. :/ My understanding (or hope maybe) is that
the new Solr Cloud implementation will support auto-shar
Hi all
I'm a relatively new solr user, and recently I discovered the interesting
solr cloud feature. I have some basic questions:
(please excuse me if I get the terminologies wrong)
- from my understanding, this is still a work in progress. How mature is it?
Is there any estimate on the official