See:
https://lucene.472066.n3.nabble.com/SolrServerException-Timeout-occured-while-waiting-response-from-server-tc4464632.html
Maybe this will help somebody. I was dealing with the exact same problem. We are
running on VMs, and all of our timeout problems went away after we switched
from a 5-year-old VMware
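For anyone hitting the same SolrServerException on the client side: in SolrJ 4.x the HTTP timeouts can be raised on the client itself. A minimal sketch, assuming a SolrJ 4.x setup; the URL and timeout values below are made up for illustration, not recommendations from this thread:

```java
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class SolrClientFactory {
    // SolrJ 4.x client; the URL and timeout values are illustrative only.
    public static HttpSolrServer create() {
        HttpSolrServer server =
                new HttpSolrServer("http://solr-master:8983/solr/collection1");
        server.setConnectionTimeout(5000); // ms allowed to establish the TCP connection
        server.setSoTimeout(120000);       // socket read timeout in ms; this is the timer
                                           // behind "Timeout occured while waiting
                                           // response from server"
        return server;
    }
}
```

Raising the socket timeout only hides slow responses, of course; it doesn't explain them.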
Hey Karl,
Can you elaborate more on your system? How many shards does your
collection have, and what is the replica type? Are you using an external ZooKeeper?
It looks like (from the logs) that you are running Solr in SolrCloud mode?
It really sounds like you're re-inventing SolrCloud, but
you know your requirements best.
Erick
On Wed, Nov 2, 2016 at 8:48 PM, Kent Mu wrote:
Thanks Erick!
Actually, similar to SolrCloud, we split our data into 8 custom shards (1
master with 4 slaves each), each with one Citrix and two Apache web servers to
reduce server load through load balancing.
As we are running an e-commerce site, the number of reviews of selling
products grows v
My 2 cents (rounded):
Quote: "the size of our index data is more than 30GB every year now"
- is it the size of the *data* or the size of the *index*? This is super important!
You can have petabytes of data, growing by terabytes a year, and your index files
will grow only a few gigabytes a year at most.
Not
You need to move to SolrCloud when it's
time to shard ;).
More seriously, at some point simply adding more
memory will not be adequate. Either your JVM
heap will grow to a point where you start encountering
GC pauses, or the time to serve requests will
increase unacceptably. "When?" you ask?
Thanks, I got it, Erick!
The size of our index data is more than 30GB every year now, and it is
still growing. Actually, our Solr is currently running on a virtual
machine, so I wonder if we need to deploy Solr on a physical machine, or if I
can just upgrade the physical memory of our virtual machine
Kent: OK, I see now. Then a minor pedantic point...
It'll avoid confusion if you use master and slaves
rather than master and replicas when talking about
non-cloud setups.
The equivalent in SolrCloud is leader and replicas.
No big deal either way, just FYI.
Best,
Erick
On Tue, Nov 1, 2016 at 8
Thanks a lot for your reply, Shawn!
No other applications run on the server. I agree with you that we need to
upgrade the physical memory and allocate a reasonable JVM heap size, so that the
operating system has spare memory available to cache the index.
Actually, we have nearly 100 million of data every
Well, we do not use SolrCloud, just a simple Solr deployment - one master
with some replicas. I agree with Shawn's opinion; I think we need to upgrade
the physical memory and allocate a reasonable JVM heap size.
Thank you all the same!
Best Regards!
Kent
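For what it's worth, the usual split on a dedicated Solr box is a moderate fixed heap with the remainder left to the OS page cache. A hedged sketch of a Solr 4.x start command; the machine size and heap value below are assumptions for illustration, not recommendations from this thread:

```shell
# Hypothetical box: 32 GB RAM, ~30 GB index, nothing else running on it.
# Pin the heap (Xms == Xmx avoids resize pauses) and keep it modest;
# the remaining ~24 GB stays free so the OS can cache the index files.
java -Xms8g -Xmx8g -jar start.jar
```

The point is that memory given to the JVM heap is memory taken away from the OS cache that actually makes searches fast, so bigger is not automatically better.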
2016-11-02 4:52 GMT+08:00 Erick Erickson :
This is a bit confusing as you're mixing terms from
older master/slave Solr with SolrCloud.
You say "our deployment is one master with 10 replicas"
and
"we index data to the
master, and search data from the replicas via load balancing"
So how are you getting your data to the replicas?
There is n
Quote:
It doesn't happen often. After analysis, we found that it happens only when the
replicas synchronize data from the master Solr server. It seems that the
replicas block search requests while synchronizing data from the master; is that
true?
Solr makes the new searcher available after replication completes,
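For reference, this is the shape of a Solr 4.x slave-side replication config; the masterUrl and pollInterval below are placeholders. Replication copies new index files in the background, and searches keep being served by the old searcher until the new one is warmed and swapped in:

```xml
<!-- solrconfig.xml on each slave; masterUrl and pollInterval are examples -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://solr-master:8983/solr/core1</str>
    <str name="pollInterval">00:05:00</str> <!-- HH:MM:SS between polls -->
  </lst>
</requestHandler>
```

If searches appear to stall during replication, it is more often cache warming of the new searcher (or page-cache eviction from copying large segment files) than an actual block on the request handler.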
On 11/1/2016 1:07 AM, Kent Mu wrote:
> Hi friends! We came across an issue when using SolrJ (4.9.1) to
> connect to the Solr server. Our deployment is one master with 10 replicas.
> We index data to the master, and search data from the replicas via
> load balancing. The error stack is as below: *Ti
A timeout like this _probably_ means your docs were indexed just fine. I'm
curious why adding the docs takes so long; how many docs are you sending at
a time?
Best
Erick
On Thu, Mar 21, 2013 at 1:31 PM, Benjamin, Roy wrote:
> I'm calling: m_server.add(docs, 12);
>
> Wondering if the ti
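One thing worth checking in that call: if m_server is a SolrJ client, the second argument to add(docs, n) is commitWithin in *milliseconds*, so add(docs, 12) asks Solr to commit within 12 ms of every batch, which can make adds crawl. A hedged sketch of batched adds with a saner commitWithin; the batch size and interval are illustrative, not tuned values:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    // Batch size and commitWithin below are illustrative assumptions.
    static final int BATCH_SIZE = 1000;
    static final int COMMIT_WITHIN_MS = 60_000; // commit within 1 minute, not 12 ms

    static void index(SolrServer server, List<SolrInputDocument> all) throws Exception {
        List<SolrInputDocument> batch = new ArrayList<>();
        for (SolrInputDocument doc : all) {
            batch.add(doc);
            if (batch.size() >= BATCH_SIZE) {
                server.add(batch, COMMIT_WITHIN_MS); // one round trip per batch
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            server.add(batch, COMMIT_WITHIN_MS);     // flush the final partial batch
        }
    }
}
```

With a commitWithin of 12 ms, Solr is effectively forced into a commit per add call, and frequent commits (with their searcher reopens) are a classic cause of slow indexing.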