Hello
I'm using Zookeeper 3.4.6
The ZK log data folder keeps growing with transaction log files (log.*).
I set the following in zoo.cfg:
autopurge.purgeInterval=1
autopurge.snapRetainCount=3
dataDir=..\\data
Per the ZK log, it reads those parameters:
2017-07-09 17:44:59,792 [myid:] - INFO [main:
Hi Everyone,
We are running Solr 6.4.1 in cloud mode on a CentOS production server.
Currently, we are using the embedded ZooKeeper. It is a simple setup with
one collection and one shard.
By default, the Jetty server binds to all interfaces, which is not safe, so we
have changed the bin/solr script. We
You can try running the purge manually to see if it is working:
org.apache.zookeeper.server.PurgeTxnLog.
And use a cron job to do the cleanup.
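In case it helps, here is a minimal, untested sketch of calling that class from Java; the paths are placeholders that should point at the dataDir / dataLogDir from your zoo.cfg, and the retain count mirrors autopurge.snapRetainCount=3:

import java.io.File;
import org.apache.zookeeper.server.PurgeTxnLog;

public class ManualPurge {
    public static void main(String[] args) throws Exception {
        // Placeholder paths: point these at dataLogDir (txn logs) and dataDir (snapshots).
        // With no separate dataLogDir configured, both are the same folder.
        File txnLogDir = new File("..\\data");
        File snapDir = new File("..\\data");
        // Keep the 3 most recent snapshots plus their transaction logs (count must be >= 3).
        PurgeTxnLog.purge(txnLogDir, snapDir, 3);
    }
}

The same class can also be run from the command line (java -cp ... org.apache.zookeeper.server.PurgeTxnLog <dataLogDir> [<snapDir>] -n 3), and the ZooKeeper distribution ships a bin/zkCleanup.sh wrapper around it that can go straight into cron; on Windows a Scheduled Task can play the cron role.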
On 7/9/17, 11:07 AM, "Avi Steiner" wrote:
Hello
I'm using Zookeeper 3.4.6
The ZK log data folder keeps growing with transaction log files (lo
Hi,
I have personally written a Python script to parse RDF files into an in-memory
graph structure and then pull data from that structure to index into Solr.
That is, you may perfectly well have RDF (NT, Turtle, whatever) as the source but index
sub-structures in very specific ways.
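For illustration only (the actual script mentioned above was Python; this sketch uses Apache Jena and SolrJ as stand-ins, and the field naming is invented), the general shape of "parse RDF into a graph, then index selected sub-structures" could look roughly like this:

import org.apache.jena.rdf.model.*;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class RdfToSolr {
    public static void main(String[] args) throws Exception {
        // Load the RDF source (Turtle here, NT etc. work the same way) into an in-memory graph.
        Model model = ModelFactory.createDefaultModel();
        model.read("data.ttl");  // placeholder file name

        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/mycollection").build()) {
            // Build one Solr document per subject, i.e. index a sub-structure
            // of the graph rather than raw subject/predicate/object triples.
            ResIterator subjects = model.listSubjects();
            while (subjects.hasNext()) {
                Resource subject = subjects.next();
                if (!subject.isURIResource()) continue;  // skip blank nodes in this sketch
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", subject.getURI());
                StmtIterator stmts = subject.listProperties();
                while (stmts.hasNext()) {
                    Statement s = stmts.next();
                    // Illustrative field naming: predicate local name plus a dynamic-field suffix.
                    doc.addField(s.getPredicate().getLocalName() + "_s",
                            s.getObject().toString());
                }
                solr.add(doc);
            }
            solr.commit();
        }
    }
}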
Anyway, as Erick point
Hello all,
I have to index (and search) data organised as follows: many files on the
filesystem, and each file has extra metadata stored in a DB (the DB table has a
reference to the file path).
I think I should have one Solr document per file, with fields coming from both the
DB (through DIH) and
4. Write an external program that fetches the file, fetches the metadata,
combines them, and sends them to Solr.
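For what it's worth, a rough sketch of option 4 using SolrJ plus JDBC; the JDBC URL, table, column and field names are invented, and it assumes plain-text files (binary formats like PDF would need Tika or the extracting handler instead):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.*;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexFilesWithDbMetadata {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/meta", "user", "pass");     // placeholder DB
             HttpSolrClient solr = new HttpSolrClient.Builder(
                 "http://localhost:8983/solr/files").build();             // placeholder collection
             Statement st = db.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT id, file_path, title, author FROM file_metadata")) {
            while (rs.next()) {
                SolrInputDocument doc = new SolrInputDocument();
                // Metadata fields come from the DB row...
                doc.addField("id", rs.getString("id"));
                doc.addField("title", rs.getString("title"));
                doc.addField("author", rs.getString("author"));
                // ...and the body comes from the file the row points at.
                String path = rs.getString("file_path");
                doc.addField("content", new String(Files.readAllBytes(Paths.get(path))));
                solr.add(doc);
            }
            solr.commit();
        }
    }
}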
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Jul 9, 2017, at 3:03 PM, Giovanni De Stefano wrote:
>
> Hello all,
>
> I have to index
Jan
I hope this is not off-topic, but I am curious: if you do not use the
three fields subject, predicate, and object for indexing RDF,
then what is your algorithm? Maybe document nesting is appropriate for
this?
Cheers -- Rick
On 2017-07-09 05:52 PM, Jan Høydahl wrote:
Hi,
I have personal
I did another round of testing; the tlog on the target cluster is cleaned up once the
hard commit is triggered. However, on the source cluster, the tlog files stay there
and never get cleaned up.
I'm not sure if there is any command to run manually to trigger the
updateLogSynchronizer. The updateLogSynchron
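Since updateLogSynchronizer is part of CDCR, for reference the CDCR queue status on the source collection can be checked with a call like the following (assuming CDCR is what is configured here; the host and collection name are placeholders, and the same call works with curl):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class CdcrQueueCheck {
    public static void main(String[] args) throws Exception {
        // QUEUES reports the tlog count/size the source is still holding for the target.
        URL url = new URL("http://localhost:8983/solr/mycollection/cdcr?action=QUEUES");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}

(One common reason for tlogs piling up on the source is the CDCR buffer still being enabled, in which case the DISABLEBUFFER action is the usual remedy; whether that applies here depends on the cdcr configuration.)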
I have found that it is likely due to the hashJoin in the streaming
expression, as I understand this will store all tuples in memory.
I have more than 12 million documents in the collections which I am querying, in 1
shard. The index size of the collection is 45 GB.
Physical RAM of the server: 384 GB
Java heap: 22 GB
Thanks for the info, Sean.
Can I do it on Windows Server?
-Original Message-
From: Xie, Sean [mailto:sean@finra.org]
Sent: Sunday, July 9, 2017 7:33 PM
To: solr-user@lucene.apache.org
Subject: Re: ZooKeeper transaction logs
You can try running the purge manually to see if it is working:
org.apache.z
Thanks for the reference. I am guessing this feature is not available through
the post utility inside solr/bin.
Regards,
Imran
Sent from Mail for Windows 10
From: Jan Høydahl
Sent: Friday, July 7, 2017 1:51 AM
To: solr-user@lucene.apache.org
Subject: Re: help on implicit routing
http://lucene.a