>
> The Nginx server closed the connection. Any info in the nginx log?
>
> Dominique
>
> On Mon, Aug 17, 2020 at 5:33 PM, Odysci wrote:
>
>> Hi,
>> thanks for the reply.
>> We're using solr 8.3.1, ZK 3.5.6
>> The stacktrace is below.
>>
stacktrace generated by SolrJ
> - any concomitant and relevant information in the Solr node logs or ZK logs
>
> Out of curiosity, why not use a load-balanced LBHttpSolrClient?
>
> Regards.
>
> Dominique
>
>
> On Mon, Aug 17, 2020 at 12:41 AM, Odysci wrote:
>
> > Hi,
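For reference, a minimal SolrJ sketch of the load-balanced client suggested above (URLs and collection name are illustrative assumptions):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.LBHttpSolrClient;

    public class LbClientExample {
      public static void main(String[] args) throws Exception {
        // Round-robins requests across the listed nodes and skips dead ones.
        try (LBHttpSolrClient client = new LBHttpSolrClient.Builder()
            .withBaseSolrUrl("http://solr1:8983/solr/mycollection") // illustrative URL
            .withBaseSolrUrl("http://solr2:8983/solr/mycollection") // illustrative URL
            .build()) {
          client.query(new SolrQuery("*:*"));
        }
      }
    }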
Hi,
We have a solrcloud setup with 2 Solr nodes and 3 ZK instances. Until
recently my application server always called one of the Solr nodes (via
SolrJ), and that worked just fine.
To improve reliability, I put an Nginx reverse-proxy load balancer
between my application server and the Solr nodes.
Kerberos, have
> clients without any admin access on Solr (minimum privileges only!), use
> Solr whitelists so that only clients that should access Solr can do so, and
> enable the Java security manager (* to make it work with Kerberos auth you
> need to wait for a newer Solr version).
>
Folks,
I suspect one of our Zookeeper installations on AWS was subject to a Meow
attack (https://arstechnica.com/information-technology/2020/07/more-than-1000-databases-have-been-nuked-by-mystery-meow-attack/).
Basically, the configuration for one of our collections disappeared from
Zookeeper.
to process all those docs on the client if you’re
> doing some kind of analytics.
>
> 2> if you really, truly need all 300K docs, try getting them in chunks
> using CursorMark.
>
> Best,
> Erick
>
> > On Jul 13, 2020, at 10:03 PM, Odysci wrote:
> >
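A minimal SolrJ sketch of Erick's CursorMark suggestion (URL, collection, sort field, and chunk size are illustrative; cursor sorts must end on the uniqueKey field):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.params.CursorMarkParams;

    public class CursorExample {
      public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
            new HttpSolrClient.Builder("http://solr1:8983/solr/mycollection").build()) {
          SolrQuery q = new SolrQuery("*:*");
          q.setRows(1000);                       // chunk size
          q.setSort("id", SolrQuery.ORDER.asc);  // must end on the uniqueKey
          String cursor = CursorMarkParams.CURSOR_MARK_START;
          while (true) {
            q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
            QueryResponse rsp = client.query(q);
            rsp.getResults().forEach(doc -> { /* process doc */ });
            String next = rsp.getNextCursorMark();
            if (cursor.equals(next)) break;      // no more results
            cursor = next;
          }
        }
      }
    }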
12, 2020 at 1:02 AM Shawn Heisey wrote:
> On 6/25/2020 2:08 PM, Odysci wrote:
> > I have a solrcloud setup with a 12GB heap and I've been trying to
> > optimize it to avoid OOM errors. My index has about 30 million docs and
> > about 80GB total, 2 shards, 2 replicas
Hi,
Just summarizing:
I've experimented with different sizes of filterCache and documentCache,
after removing any maxRamMB limit. Now the heap seems to behave as expected:
it grows, then GC (not a full one) kicks in multiple times and keeps
the used heap under control. Eventually a full GC may kick in
Thanks.
The heap dump indicated that most of the space was occupied by the caches
(filterCache and documentCache in my case).
I followed your suggestion of removing the maxRamMB limit on filterCache
and documentCache and decreasing the number of entries allowed.
It did have a significant impact on the
share your cache settings?
>
> On the other hand, did you check here:
> https://cwiki.apache.org/confluence/display/SOLR/SolrPerformanceProblems
>
> Kind Regards,
> Furkan KAMACI
>
> On Thu, Jun 25, 2020 at 11:09 PM Odysci wrote:
>
> > Hi,
> >
> > I have a
Hi,
I have a solrcloud setup with a 12GB heap and I've been trying to optimize it
to avoid OOM errors. My index has about 30 million docs and about 80GB
total, 2 shards, 2 replicas.
In my testing setup I submit multiple queries to Solr (the same node),
sequentially, with no overlap between the documents
/sematext.com/
>
>
>
> > On 24 Jun 2020, at 16:38, Odysci wrote:
> >
> > Hi,
> >
> > I have a Solrcloud configuration with 2 nodes and 2 shards/2 replicas.
> > I configure the sizes of the solr caches on solrconfig.xml, which I
> > believe apply to
Hi,
I have a Solrcloud configuration with 2 nodes and 2 shards/2 replicas.
I configure the sizes of the Solr caches in solrconfig.xml, which I
believe apply per node.
But when I look at the caches in the Solr UI, they are shown per core
(e.g., shard1_replica_N1). Are the cache sizes defined in the
ed, the data couldn’t be removed.
>
> Best,
> Erick
>
> > On Jun 23, 2020, at 12:58 PM, Odysci wrote:
> >
> > Hi,
> > I've got a solrcloud configuration with 2 shards and 2 replicas each.
> > For some unknown reason, one of the replicas was in "recovery"
Hi,
I've got a solrcloud configuration with 2 shards and 2 replicas each.
For some unknown reason, one of the replicas was in "recovery" mode
forever, so I decided to create another replica, which went fine.
Then I proceeded to delete the old replica (using the Solr UI). After a
while the interface
> including dealing with any changes to the topology, i.e. nodes
> stopping/starting, replicas going into recovery, new collections being
> added, etc.
>
> HTH,
> Erick
>
> > On Jun 4, 2020, at 7:11 PM, Odysci wrote:
> >
> > Erick,
> > thanks for the reply.
>
ter for all requests.
>
> Best,
> Erick
>
>
>
> > On May 27, 2020, at 11:12 AM, Odysci wrote:
> >
> > Hi,
> >
> > I have a question regarding solrcloud searches on both replicas of an
> index.
> > I have a solrcloud setup with 2 physical machines
Hi,
I'm looking for advice on improving the performance of our Solr setup, in
particular the trade-off between fewer, larger machines and more, smaller
machines. Our full index has just over 100 million docs, and we do
almost all searches using fq's (with q=*:*) and facets. We are using s
Hi,
I have a question regarding solrcloud searches on both replicas of an index.
I have a solrcloud setup with 2 physical machines (let's call them A and
B), and my index is divided into 2 shards, and 2 replicas, such that each
machine has a full copy of the index. My Zookeeper setup uses 3 instances
should be indexed for better performance?
>
> stored="false" required="false" multiValued="false"
> docValues="true" />
>
> Sylvain
>
> On Sat, Apr 18, 2020 at 6:46 PM, Odysci wrote:
>
> > Hi,
> >
> > W
Hi,
We are seeing significant performance degradation on single queries that
use an fq with multiple values, as in:
fq=field1_name:(V1 V2 V3 ...)
If we use only one value in the fq (say only V1), we get Qtime = T ms.
As we increase the number of values, say to 5, Qtime more than
triples, even i
could have a major impact. If
> FastVectorHighlighter is not used, the highlighter has
> to re-analyze the text in order to highlight, and if you’re
> highlighting in large text fields that can be very expensive.
>
> Norms aren’t relevant there…
>
> So let’s see the full hi
I'm using solr-8.3.1 on a solrcloud setup with 2 Solr nodes and 2 ZK nodes.
I was experiencing very slow search-with-highlighting on an index that had
'omitNorms="true"' on all fields.
At the suggestion of a stackoverflow post, I changed all fields to
'omitNorms="false"' and the search-with-highlighting
ev/lucene/lucene-solr-8.3.1-RC2-reva3d456fba2cd1b9892defbcf46a0eb4d4bb4d01f/solr/Re-index>
> on it, and see if you still have issues.
>
> On Sun, 1 Dec 2019 at 17:35, Odysci wrote:
>
> > Hi,
> > I have a solr cloud setup using solr 8.3 and zookeeper, which I recently
> > converted
Hi,
I have a solr cloud setup using solr 8.3 and zookeeper, which I recently
converted from solr 7.7. I converted the index using the index updater and
it all went fine. My index has about 40 million docs.
I used a separate program to check the values of all fields in the solr
docs, for consistency
> On 11/28/2019 9:30 AM, Odysci wrote:
> > No, I did nothing specific to Jetty. Should I?
>
> The http/2 Solr client uses a different http client than the previous
> ones do. It uses the client from Jetty, while the previous clients use
> the one from Apache.
>
> Achievi
No, I did nothing specific to Jetty. Should I?
Thx
On Wed, Nov 27, 2019 at 6:54 PM Houston Putman
wrote:
> Are you overriding the Jetty version in your application using SolrJ?
>
> On Wed, Nov 27, 2019 at 4:00 PM Odysci wrote:
>
> > Hi,
> > I have a solr cloud setup u
I'm using OpenJDK 11
On Wed, Nov 27, 2019 at 7:12 PM Jörn Franke wrote:
> Which JDK version? In this setting I would recommend JDK 11.
>
> > On Nov 27, 2019, at 10:00 PM, Odysci wrote:
> >
> > Hi,
> > I have a solr cloud setup using solr 8.3 and SolrJ
Hi,
I have a solr cloud setup using solr 8.3 and SolrJ, which works fine using
the HttpSolrClient as well as the CloudSolrClient. I use 2 solr nodes with
3 Zookeeper nodes.
Recently I configured my machines to handle SSL and HTTP/2, and then tried
using the Http2SolrClient in my Java code, supported
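A minimal sketch of building the client in question (SolrJ 8.x; the URL is an illustrative assumption):

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.Http2SolrClient;

    public class Http2Example {
      public static void main(String[] args) throws Exception {
        // Http2SolrClient uses Jetty's HTTP client under the hood,
        // which is why the Jetty version question above matters.
        try (Http2SolrClient client =
            new Http2SolrClient.Builder("https://solr1:8983/solr/mycollection").build()) {
          client.query(new SolrQuery("*:*"));
        }
      }
    }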