Hi,
I can’t find any documentation about the parameter zookeeper_server_java_heaps
in zoo.cfg.
The way to control the Java heap size is either the java.env file or the
zookeeper-env.sh file. In zookeeper-env.sh:
SERVER_JVMFLAGS="-Xmx512m"
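Either file works; as a minimal sketch, the java.env alternative (the file lives in ZooKeeper's conf/ directory, and 512m is only an example value) would be:

```shell
# conf/java.env: sourced by the ZooKeeper start scripts.
# Size -Xmx to your server's RAM; ZooKeeper usually needs far less
# than Solr, since its working set is small.
export SERVER_JVMFLAGS="-Xmx512m"
```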
How much RAM does your server have?
Regards
Dominique
On Mon.
Hi Solr Users,
I have a very big synonym file (>5 MB). I am unable to start Solr in cloud
mode because it throws an error stating that the synonyms file is
too large. I figured out that ZooKeeper doesn't accept a file larger
than 1 MB.
I tried to break down my synonyms file into smaller ch
Aside from the fact that a 5 MB synonym file is rather unusual (what is the use
case for such a large synonym file?) and that it will have an impact on index
size and/or query time:
You can configure the ZooKeeper server and the Solr client to allow larger
files using the jute.maxbuffer option.
> On 30.07.2019 at
You have to increase the -Djute.maxbuffer for large configs.
In Solr's bin/solr.in.sh use e.g.
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10000000"
This will increase maxbuffer for ZooKeeper on the Solr side to 10 MB (the
value is given in bytes).
In Zookeeper zookeeper/conf/zookeeper-env.sh
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djut
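Putting both sides together, a minimal sketch (the value is in bytes, so 10000000 is roughly 10 MB; exact file locations vary by install):

```shell
# Solr side: bin/solr.in.sh
SOLR_OPTS="$SOLR_OPTS -Djute.maxbuffer=10000000"

# ZooKeeper side: conf/zookeeper-env.sh
SERVER_JVMFLAGS="$SERVER_JVMFLAGS -Djute.maxbuffer=10000000"
```

Both ends must agree, since the limit is enforced by the server and by every client.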
Hi,
My environment has 5 servers with Solr + ZooKeeper on the same hosts.
Each server has 48 GB RAM (Solr: Xms 28 GB and Xmx 32 GB).
Looking at my servers and processes, ZooKeeper has Xmx 1000 MB and Xmx
4096 MB (the last one was changed by me).
Why 2 values for Xmx?
Regards,
Two Xmx values do not make sense.
Your heap seems unusually large. Usually your heap should be much smaller than
the available memory, so the OS can use the rest for index caching, which is
off-heap.
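For illustration, a sketch of a more conservative sizing on a 48 GB host (the 8g figure is an assumption for the example, not a recommendation):

```shell
# bin/solr.in.sh: keep the JVM heap well below physical RAM so the
# OS page cache (off-heap) can hold the index files.
SOLR_HEAP="8g"
```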
> On 30.07.2019 at 13:25, Rodrigo Oliveira wrote
> :
>
> Hi,
>
> My environment have 5 servers with solr + zookeeper
Hello all,
We have a 3-node Solr cluster running on google cloud platform. I would
like to schedule a backup and have been trying the backup API and getting
java.nio.file.NoSuchFileException:java.nio.file.NoSuchFileException error.
I suspect it is because a shared drive is necessary. Google VM ins
On 7/30/2019 5:41 AM, Jayadevan Maymala wrote:
We have a 3-node Solr cluster running on google cloud platform. I would
like to schedule a backup and have been trying the backup API and getting
java.nio.file.NoSuchFileException:java.nio.file.NoSuchFileException error.
I suspect it is because a sha
Hi,
Following up on the thread, here is more data about my Solr + ZooKeeper
processes:
root 48425 1 26 Jul29 ? 03:00:39 java -server -Xms28g
-Xmx32g -XX:NewRatio=3 -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90
-XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC -XX:ConcGCThreads=4
-XX:ParallelGCThread
On Tue, Jul 30, 2019 at 5:56 PM Shawn Heisey wrote:
> On 7/30/2019 5:41 AM, Jayadevan Maymala wrote:
> > We have a 3-node Solr cluster running on google cloud platform. I would
> > like to schedule a backup and have been trying the backup API and getting
> > java.nio.file.NoSuchFileException:java
Hi All,
I have an author suggester (searchcomponent and the related request
handler) defined in solrconfig:
  <searchComponent name="suggest" class="solr.SuggestComponent">
    <lst name="suggester">
      <str name="name">author</str>
      <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
      <str name="dictionaryImpl">DocumentDictionaryFactory</str>
      <str name="field">BOOK_productAuthor</str>
      <str name="suggestAnalyzerFieldType">short_text_hu</str>
      <str name="indexPath">suggester_infix_author</str>
      <str name="highlight">false</str>
      <str name="buildOnStartup">false</str>
    </lst>
  </searchComponent>
On 7/30/2019 7:11 AM, Jayadevan Maymala wrote:
We will need the *FULL* error message. It is probably dozens of lines
long and MIGHT contain multiple "Caused by" sections.
{
"responseHeader":{
"status":500,
"QTime":22},
"Operation backup caused exception:":"java.nio.file.NoSuchF
Hi Roland,
Could you check the Analysis tab (
https://lucene.apache.org/solr/guide/8_1/analysis-screen.html) and tell us how
the term is analyzed at both query and index time?
Kind Regards,
Furkan KAMACI
On Tue, Jul 30, 2019 at 4:50 PM Szűcs Roland
wrote:
> Hi All,
>
> I have an author suggester (searchc
Hi Vipul,
You are welcome!
Kind Regards,
Furkan KAMACI
On Fri, Jul 26, 2019 at 11:07 AM Vipul Bahuguna <
newthings4learn...@gmail.com> wrote:
> Hi Furkan -
>
> I realized that I was searching incorrectly.
> I later realized that if I need to search by specific field, I need to do
> as you sugge
The FS backup feature requires a shared drive as you say, and this is clearly
documented; there is no way around it. Cloud Filestore would likely fix it.
Or you could write a new backup repo plugin for backup directly to Google Cloud
Storage?
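For example, with a Filestore volume mounted at the same path on every node, a backup call might look like this (the backup name, collection name, and mount path are hypothetical):

```shell
# Collections API backup; 'location' must be a directory that every
# Solr node can see (e.g. an NFS/Filestore mount shared by all hosts).
curl "http://localhost:8983/solr/admin/collections?action=BACKUP&name=nightly&collection=mycollection&location=/mnt/shared/solr-backups"
```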
Jan Høydahl
> On 30 Jul 2019 at 13:41, Jayadevan Maymal
David,
Firstly, thanks for putting together such a thorough email; it helps a lot in
understanding some of the things we were just guessing at, because (as you
mentioned a few times) the documentation around all of this is rather sparse.
I’ll explain the context around the use case we’re trying to s
Hi Adi,
RAID10 is good for satisfying both indexing and query, striping across
mirror sets. However, you lose half of your raw disk space, just like with
RAID1.
Here is a mail thread of mine which discusses RAID levels specifically for
Solr:
https://lists.apache.org/thread.html/462d7467b2f2d064223eb4
Hi Furkan,
Thanks for the suggestion; I always forget the most effective debugging tool,
the Analysis page.
It turned out that "Jó" was a stop word and was eliminated during text
analysis. What I will do is create a new field type without stop word
removal, and I will use it like this:
sh
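A minimal sketch of such a field type (the name and analyzer chain are assumptions, modeled on a text field with the stop filter simply left out):

```xml
<fieldType name="short_text_hu_nostop" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- no StopFilterFactory here, so terms like "Jó" survive analysis -->
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```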
Hi Furkan,
Thanks for your response !
Indeed, RAID10 with both mirroring and striping should satisfy the need, but
per some benchmarks on the web there is still an impact on write performance
compared to RAID0, which is considered much better (attaching a table that
summarizes differ
On 7/30/2019 12:12 PM, Kaminski, Adi wrote:
Indeed RAID10 with both mirroring and striping should satisfy the need,
but per some benchmarks in the network there is still an impact on write
performance on it compared to RAID0 which is considered as much better
(attaching a table that summarizes
I have a SOLR 4.10.2 core, and I am upgrading to 8.1.1.
I created an 8.1.1 core manually using the default configset, and then
brought my settings over into the 8.1.1 schema.
I have adjusted schema.xml and solrconfig.xml, and the core is
queryable in 8.1.1.
I have a field named Company:
Hi,
We are trying to decide whether we should upgrade to Solr 7.7.2 or
Solr 8.2.0. We are currently on Solr 6.3.0.
On one hand, 8.2.0 feels like a good choice because it is the latest
version. But then experience tells us that initial versions usually have a lot
of bugs
On Tue, Jul 30, 2019 at 4:41 PM Sanders, Marshall (CAI - Atlanta) <
marshall.sande...@coxautoinc.com> wrote:
> I’ll explain the context around the use case we’re trying to solve and
> then attempt to respond as best I can to each of your points. What we have
> is a list of documents that in our c
Hi all,
Thanks for your invaluable and helpful answers.
I currently don't have an external ZooKeeper running. I am working as per
the documentation for SolrCloud without an external ZooKeeper. I will add
the external ZooKeeper later, once the changes work as expected.
*1) Will I still need to make
Ad 1) It needs to be configured on the ZooKeeper server, in Solr, and in all
other ZK clients.
Ad 2) You never need to shut it down in production to update synonym files.
Use the configset API to re-upload the full configuration, including updated
synonyms:
https://lucene.apache.org/solr/guide/7_4/confi
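As a sketch (the configset name, local path, and ZooKeeper address are assumptions):

```shell
# Re-upload the configset, including the updated synonyms.txt
bin/solr zk upconfig -n myconfig -d /path/to/configset/conf -z zk1:2181
# Reload the collection so it picks up the new config
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
```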
The idea of using an external program could be good.
> On 31.07.2019 at 08:06, Salmaan Rashid Syed wrote
> :
>
> Hi all,
>
> Thanks for your invaluable and helpful answers.
>
> I currently don't have an external zookeeper loaded. I am working as per
> the documentation for solr cloud without
Hi,
Regarding the issue, I have tried to put the following in zoo.cfg under
ZooKeeper:
4lw.commands.whitelist=mntr,conf,ruok
But it is still showing this error:
*"Errors: - membership: Check 4lw.commands.whitelist setting in zookeeper
configuration file."*
As I am using SolrCloud, the collection
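Note that ZooKeeper must be restarted for the whitelist change to take effect; it can then be verified with one of the whitelisted four-letter commands (host and port are assumptions for a default install):

```shell
# Send the 'ruok' four-letter word; a healthy server whose whitelist
# includes it replies "imok".
echo ruok | nc localhost 2181
```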