Hello everyone, we ran into two Solr problems and hope to get help. The
amount of data is large: 24.5 TB a day, 110 billion records. We had been
using 49 Solr nodes; because storage was insufficient we expanded the
cluster to more than 100 nodes, and we found that past 60 nodes the
SolrCloud's performance
We found a problem in Solr 6.0. After Solr restarts and recovers data
from ZooKeeper, the collection configuration is centralized in the
clusterstate.json file. If the number of collections and replicas is at
all large, that file can easily exceed 1 MB. This has caused a series of
problems.
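ZooKeeper znodes are limited to 1 MB by default, which is why a shared
clusterstate.json becomes a hard ceiling. One way out on Solr 5.x/6.x,
sketched here with a hypothetical host and collection name, is to migrate
each collection to its own state.json znode (stateFormat=2) via the
Collections API:

  curl 'http://localhost:8983/solr/admin/collections?action=MIGRATESTATEFORMAT&collection=mycollection'

Newer Solr versions create collections in the per-collection format by
default, so only collections carried forward in the old format should need
migrating; raising ZooKeeper's jute.maxbuffer is sometimes mentioned as a
stopgap, but it must be changed on every server and client.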
x options to truly merge the
> shards, but I wouldn't consider them until the above was proven to be
> unsatisfactory.
>
> In any case, you need to be sure that the machines that remain are
> powerful enough to host all the documents you've put on them.
>
> Best,
Hello, everyone, we encountered two Solr problems and hope to get help.
Our data volume is very large, 24.5 TB a day, and the number of records is
110 billion. We originally used 49 Solr nodes; because of insufficient
storage, we expanded to 100. For a Solr cluster composed of multiple
machines, we
Hello everyone. Solr's solrhome can only be configured with a single
path, but our collections are very large. There are two directories on
the OS, each with 61 TB of space. If we start only one Solr node on a
host, we can use only 61 TB and half the space is wasted; if we start two
nodes, we find that the I/O contention
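For what it's worth, the bin/solr script does let two independent nodes
share one install, each with its own solr home on a separate filesystem; a
minimal sketch, assuming hypothetical mount points and a /solr chroot in
ZooKeeper:

  bin/solr start -cloud -p 8983 -s /data1/solrhome -z zk1:2181,zk2:2181,zk3:2181/solr
  bin/solr start -cloud -p 8984 -s /data2/solrhome -z zk1:2181,zk2:2181,zk3:2181/solr

Whether the contention comes from the two JVMs or from the two directories
sharing a disk controller is worth establishing before ruling this layout
out.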
Hello everyone, I have run into a shard-reduction problem with Solr. My
current cluster is deployed in SolrCloud mode, and now I need to take
several of the Solr machines for other purposes. The Solr version I use is
Solr 6.0. How should I do this? Thank you for your help.
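Solr 6.0 predates the REPLACENODE command (added in 6.2), but the same
effect can be had per shard with ADDREPLICA and DELETEREPLICA; a sketch
with hypothetical collection, shard, node, and replica names:

  curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=keeper01:8983_solr'
  curl 'http://localhost:8983/solr/admin/collections?action=DELETEREPLICA&collection=mycollection&shard=shard1&replica=core_node12'

Add the new replica on a node that will remain, wait for it to report
active, then delete the copy on the machine being freed; repeat for every
shard hosted there before shutting the node down.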
With a large number of collections, we also found a great many waiting
threads, mainly commit threads and search threads, which led to a rapid
decline in the speed of Solr; the number of these threads reached more
than 2,000.
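A quick way to see what those 2,000 threads are doing is to bucket a
thread dump by state; a sketch, with a made-up pid:

  jstack 12345 | grep 'java.lang.Thread.State' | sort | uniq -c

Parked pool threads are cheap individually, but thousands of them usually
point at executors that are created per collection and never shut down.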
I didn't have a solution to this problem, but I found that relo
When we have 49 shards per collection and more than 600 collections, Solr
has serious performance problems, and I don't know how to deal with them.
My advice to you is to minimize the number of collections.
Our environment is 49 Solr server nodes, each with 32 CPUs and 128 GB, and
the data volume
doesn't work for you, let us know. But note you need
> > to show us the _entire_ return header to allow anyone to diagnose the
> > problem.
> >
> > Best,
> > Erick
> >
> > On Sat, Mar 10, 2018 at 1:03 PM, spoonerk
> wrote:
> > > I have manually un
Hello, we found a problem: in Solr 6.0, the indexing speed is influenced
by the number of collections. Below a certain limit the speed is normal;
once the limit is reached, indexing speed drops by a factor of 50.
In our environment, there are 49 Solr nodes. If each collection
the commit scheduling thread is still not destroyed after execution (with
non-automatic commits); as the number of collections increases, the commit
scheduling threads grow more and more numerous until SolrCloud is
restarted.
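If these are the commit tracker's scheduler threads, they are typically
named with a commitScheduler prefix, so their growth can be watched
directly (pid made up):

  jstack 12345 | grep -c '"commitScheduler'

A count that climbs with every collection created, and never falls, would
support the leak described above.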
2018-03-02 10:15 GMT+08:00 Shawn Heisey :
> On 3/1/2018 4:31 AM, 苗海泉 wrote:
>
>>
Thank you.
Erick Erickson wrote on Thu, Mar 1, 2018 at 23:40:
> Ant 1.10.1 works as well, it's a bug in ant...
>
> On Mar 1, 2018 06:27, "苗海泉" wrote:
>
> > Thank you for the reply. I tried it and everything is OK.
> >
> > 2018-03-01 22:06 GMT+08:00 thunderz.solr :
Thank you for the reply. I tried it and everything is OK.
2018-03-01 22:06 GMT+08:00 thunderz.solr :
> 苗海泉 wrote
> > I encountered a problem while compiling the Solr 6.0 source. I have
> > installed Ant and Ivy, and then in the solr6 source code
> >
Analyzing our Solr threads with jstack, I found 1,169 threads in total, of
which 1,037 were in a waiting state. Among those 1,037 threads, two kinds
of thread were extremely numerous: 486 threads shared the same stack
trace, searcherExecutor-5694-thread-1 -
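Grouping the dump by thread-name prefix shows which pools dominate; a
sketch with a made-up pid:

  jstack 12345 | grep '^"' | sed 's/-thread-[0-9]*.*//' | sort | uniq -c | sort -rn | head

Each core gets its own searcherExecutor pool, so hundreds of them is
another symptom of the very high core count rather than a leak in itself.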
Thank you for your advice on GC tools; what do you suggest for us?
2018-02-28 23:57 GMT+08:00 Shawn Heisey :
> On 2/28/2018 2:53 AM, 苗海泉 wrote:
>
>> Thanks for your detailed advice; the monitoring product you are talking
>> about is good, but our Solr system is running on a private
>
> HTH,
> Emir
> --
> Monitoring - Log Management - Alerting - Anomaly Detection
> Solr & Elasticsearch Consulting Support Training - http://sematext.com/
>
>
>
> > On 28 Feb 2018, at 02:42, 苗海泉 wrote:
> >
> > Thank you, I r
I encountered a problem when compiling the Solr 6.0 source. I have
installed Ant and Ivy, and then ran ant eclipse in the solr6 source
directory to generate an Eclipse project; it fails with an error as
follows "
Buildfile: D:\solr-6.0.0-src\solr-6.0.0\build.xml
BUILD
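For reference, the usual sequence when building Lucene/Solr from a source
checkout is to bootstrap Ivy once and then run the target; ivy-bootstrap
and eclipse are both standard targets in the project's build.xml:

  ant ivy-bootstrap   # one-time: installs the Ivy jar that Ant needs
  ant eclipse         # generates the Eclipse project files

As noted elsewhere in the thread, certain Ant releases had a bug that
breaks this, so the Ant version itself is worth checking too.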
nodes?
>
> Emir
> > On 27 Feb 2018, at 14:43, 苗海泉 wrote:
> >
> > Thank you, we had 49 shards on 49 nodes, but later f
> > On 27 Feb 2018, at 13:22, 苗海泉 wrote:
> >
> > Thanks for your reply again.
> > As I just said, there may be some misunderstanding: we have 49 solr
> nodes,
> > each collection has 25 shards, each
odes,
> one node will need to handle double the indexing load.
>
> Emir
> > On 27 Feb 2018, at 12:54, 苗海泉 wrote:
>
time; we very much hope for ideas that lead to a solution to this problem.
Greatly appreciated.
2018-02-27 19:46 GMT+08:00 苗海泉 :
> Thank you for the reply.
> One collection has 25 shards with one replica; one Solr node holds about
> 5 TB on disk.
> GC has been checked and modified as follows:
> SOLR_JAVA_
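The variables being edited there live in solr.in.sh; purely as an
illustration of the kind of settings under discussion (values below are
hypothetical and must be sized to the real machines):

  SOLR_JAVA_MEM="-Xms31g -Xmx31g"
  GC_TUNE="-XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=250"

Keeping the heap at or below about 31 GB preserves compressed object
pointers, which matters on 128 GB machines like these.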
> > On 27 Feb 2018, at 11:36, 苗海泉 wrote:
> >
> > I encountered a rather serious problem in the process of using Solr.
> > The Solr version we use is 6.0; our daily data volume is about 500 billion
>
I encountered a rather serious problem in the process of using Solr. The
Solr version we use is 6.0; our daily data volume is about 500 billion
documents. We create a collection every hour; there are more than a
thousand collections online, on 49 Solr nodes. When there are fewer than
800 collections, the speed is
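Since collections are created hourly, the live collection count is the
first thing to watch; a quick check, with a hypothetical host:

  curl -s 'http://localhost:8983/solr/admin/collections?action=LIST&wt=json'

Deleting or merging hourly collections that have aged out, and addressing
current data through a collection alias, is a common way to keep the count
below whatever threshold the cluster tolerates.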
I use Solr 6.0 with SolrJ 6.0, deployed in SolrCloud mode. My code does
not make an explicit commit; autoCommit and softAutoCommit are configured,
and I use the ConcurrentUpdateSolrClient class.
When we send 100 million documents, a read timeout exception often occurs
in thi
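For bulk loads like this, the usual advice is to keep commits off the
indexing path; a solrconfig.xml sketch with hypothetical intervals:

  <autoCommit>
    <maxTime>60000</maxTime>           <!-- hard commit every 60 s for durability -->
    <openSearcher>false</openSearcher> <!-- don't reopen searchers on the indexing path -->
  </autoCommit>
  <autoSoftCommit>
    <maxTime>300000</maxTime>          <!-- new documents become visible every 5 min -->
  </autoSoftCommit>

If timeouts persist, the socket (read) timeout on the HTTP client that
ConcurrentUpdateSolrClient uses can also be raised.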