quot;:"on",
"wt":"json"}},
"response":{"numFound":1,"start":0,"maxScore":1.0,"docs":[
{
"id":"1",
"fullname_s":"john smith",
"_version_":15694460
Erik,
Thank you for correcting me. Things I miss on a daily basis: _text_ :)
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2
On Tue, Jun 6, 2017 at 5:12 PM, Nick Way
wrote
Running "ant eclipse" or "ant test" in verbose mode will show you the
exact lib in the ivy2 cache which is corrupt. Delete that particular lib and
run "ant" again. Also, don't break out of the "ant" commands via
Ctrl+C while they are downloading the libraries to the ivy2 folder.
Damien,
then I poll with REQUESTSTATUS
REQUESTSTATUS is an API which provides you the status of any async API call
(including other heavy-duty APIs like SPLITSHARD or CREATECOLLECTION)
associated with an async_id at that current moment. Does that give
you "state":"completed"?
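The polling described above can be sketched as follows. The request parameters (action=REQUESTSTATUS, requestid) come from the Solr Collections API; the base URL and the canned JSON response are illustrative assumptions, since no live cluster is assumed here:

```python
import json
from urllib.parse import urlencode

def requeststatus_url(base, async_id):
    """Build the Collections API REQUESTSTATUS call for a given async id."""
    return base + "/admin/collections?" + urlencode(
        {"action": "REQUESTSTATUS", "requestid": async_id, "wt": "json"})

def request_state(response_body):
    """Extract the 'state' field from a REQUESTSTATUS JSON response."""
    return json.loads(response_body)["status"]["state"]

# Canned sample response (shape as documented for REQUESTSTATUS):
sample = ('{"responseHeader":{"status":0},'
          '"status":{"state":"completed","msg":"found [42] in completed tasks"}}')
print(request_state(sample))  # -> completed
```

In practice you would fetch `requeststatus_url(...)` in a loop and stop once the state is "completed" or "failed".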
Javed,
Can you let us know if you are running in standalone or cloud mode?
Amrit Sarkar
On Mon, Jul 17, 2017 at 11:54 AM, javeed wrote
serialize " + o.getClass()
+ "; try implementing ObjectResolver?");
}
};
While UUID implements Serializable, shouldn't a BytesRef instance too? ::
public final class UUID implements java.io.Serializable, Comparable
Can you share the payload which you are trying to update?
Amrit Sarkar
= client.query(params);
Setting key and value via SolrParams is available.
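In SolrJ the key/value pairs would go through ModifiableSolrParams; a rough raw-HTTP equivalent of setting query params is sketched below (the collection name, field names and URL are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical params; in SolrJ these would be set via
# ModifiableSolrParams.set(key, value) and passed to client.query(params).
params = {"q": "fullname_s:smith", "fl": "id,fullname_s", "wt": "json"}
query_string = urlencode(params)
url = "http://localhost:8983/solr/collection1/select?" + query_string
print(query_string)
```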
Amrit Sarkar
On Mon, Jul 17, 2017 at 8:48 PM, Ponnuswamy, Poornima (
ad?
In my opinion, you should use the newer feature, though you may hit some
limitations of JSON faceting, and there would be respective JIRAs opened for
those too. Beyond that, Mr. Seeley would be the best person to answer the second question.
Amrit Sarkar
along with it.
I am not sure whether this can be done with the current code or whether it will
be fixed / improved in the future.
Amrit Sarkar
On Mon, Jul 17
stics+Reference
for more details.
Amrit Sarkar
On Fri, Jul 7, 2017 at 4:15 PM, Antonio De Miguel
wrote:
> Hi,
>
> I'm taking a lo
of servers.
Hope this helps.
Amrit Sarkar
On Mon, Jul 17, 2017 at 11:38 PM, S G wrote:
> Hi,
>
> Does anyone know if CloudSolrClien
Sujay,
The Lucene index is in a flat-object document style, so I really don't think nested
documents at the index / storage level will ever be supported unless someone changes
the very internals of the index.
Amrit Sarkar
By saying:
I am just adding multiValued=false in the managed-schema file.
Are you modifying the "conf" on the local filesystem or going into the core's
conf directory and changing it there? If you are on SolrCloud, you should make
the same change on ZooKeeper.
nodes, the shard leaders will try to create the same
COLLECTIONCHECKPOINT, which may or may not succeed.
Amrit Sarkar
On Fri, Jul 21
Hendrik,
Can you list the error snippet so that we can refer to the code where
exactly that is happening?
Amrit Sarkar
On Fri, Jul 21
, map);
> document.addField("_version_",
> response.getResults().get(0).get("_version_"));
> docs.add(document);
> updateRequest = new UpdateRequest();
> updateRequest.add(docs);
> client.request(updateRequest, collection);
> updateRequest =
Zheng,
You may want to check https://issues.apache.org/jira/browse/SOLR-7452. I
don't know whether they are absolutely related, but I am sure I have seen
complaints and enquiries regarding imprecise statistics with JSON Facets.
Amrit Sarkar
Hi,
I didn't have a chance to go through the steps you are doing, but I followed
the one written by Varun Thacker via InfluxDB:
https://github.com/vthacker/solr-metrics-influxdb, and it works fine. Maybe
it can be of some help.
Amrit Sarkar
https://www.youtube.com/watch?v=tv5qKDKW8kk
Amrit Sarkar
On Wed, Aug 9, 2017 at 7:45 PM, sasarun wrote:
> Hi All,
>
> I found quite a few disc
Pretty much what Webster and Erick mentioned; otherwise please try the PDF I
attached. I followed the official documentation when doing that.
Amrit Sarkar
Hi Markus,
Emir already mentioned tuning reclaimDeletesWeight, which affects the merge
priority of segments about to be merged. Also consider optimising the index
from time to time, preferably scheduled weekly / fortnightly at a low-traffic
period, so you are never in the odd position of 80% deleted docs in the total index.
Amrit Sarkar
Gunalan,
ZooKeeper throws a KeeperException at /overseer for most Solr issues,
notably indexing. Match the timestamp of the ZooKeeper error against the Solr
log; the problem most probably lies there.
Amrit Sarkar
andler),
depending on the file format: csv, xml, json; but mind that it is single
threaded.
Hope this clarifies some of it.
Amrit Sarkar
On Fri, Oc
= (HttpURLConnection) u.openConnection();
Can you check at your webpage level that the headers are properly set and
include the key "content-type"?
Amrit Sarkar
Strange,
Can you add: "text/html;charset=utf-8". This is wiki.apache.org page's
Content-Type. Let's see what it says now.
Amrit Sarkar
now this is not the
issue.
Amrit Sarkar
On Fri, Oct 13, 2017 at 7:04 PM, Amrit Sarkar
wrote:
> Kevin,
>
> Just put "html" too a
pendocument.text");
mimeMap.put("ott", "application/vnd.oasis.opendocument.text");
mimeMap.put("odp", "application/vnd.oasis.opendocument.presentation");
mimeMap.put("otp", "application/vnd.oasis.opendocument.presentation");
mimeMap.put("ods&
;
String type = rawContentType.split(";")[0];
if(typeSupported(type) || "*".equals(fileTypes)) {
String encoding = conn.getContentEncoding();
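The quoted check strips any parameters (such as a charset) from the raw Content-Type header before comparing the media type. The same logic in a short sketch:

```python
def base_type(raw_content_type):
    """Drop parameters like 'charset=utf-8', keeping only the media type,
    mirroring rawContentType.split(";")[0] in the quoted Java."""
    return raw_content_type.split(";")[0].strip()

print(base_type("text/html;charset=utf-8"))  # -> text/html
```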
Amrit Sarkar
Hi Kevin,
Can you post the Solr log in the mail thread? At first glance at the code,
I don't think it handles the .md by itself.
Amrit Sarkar
Ah, Docker. The logs are placed under [solr-home]/server/log/solr/log on
the machine. I haven't played much with Docker; is there any way you can get
that file from that location?
Amrit Sarkar
pardon: [solr-home]/server/log/solr.log
Amrit Sarkar
On Fri, Oct 13, 2017 at 8:10 PM, Amrit Sarkar
wrote:
> ah oh, dockers. They are pla
"text/html", try with both.
If you get past this hurdle, let me know.
Amrit Sarkar
On Fri, Oct 13, 2017 at 8:22 PM, Kevin La
ent-not-allowed-in-prolog-error
https://stackoverflow.com/questions/3030903/content-is-not-allowed-in-prolog-when-parsing-perfectly-valid-xml-on-gae
Amrit Sarkar
Hi,
If you wish the emails to "stop", kindly "UNSUBSCRIBE" by following the
instructions at http://lucene.apache.org/solr/community.html. Hope this
helps.
Amrit Sarkar
Hi James,
As each update you are doing via an atomic operation contains the "id" /
"uniqueKey", comparing the "_version_" field value for one of them would be
fine for a batch. For the rest, Emir has listed them out.
Amrit Sarkar
ow new SolrException(ErrorCode.SERVER_ERROR, msg);
}
Not sure of the reason behind it; someone else can weigh in here, but PointFields
are not allowed to be unique keys, probably because of how they are structured
and stored on disk.
Amrit Sarkar
James,
@Amrit: Are you saying that the _version_ field should not change when
> performing an atomic update operation?
It should change; a new version will be allotted to the document. I am not
so sure about in-place updates; a test run will probably verify that.
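The version-check behaviour discussed above can be illustrated with a toy simulation of Solr's documented optimistic-concurrency rule: a positive _version_ supplied in an update must match the stored one, otherwise the update is rejected, and every successful update allots a new version. This is not Solr code (real versions are clock-derived longs; a counter stands in here):

```python
class ConflictError(Exception):
    pass

store = {}  # id -> doc, where each doc carries a _version_

def update(doc):
    """Toy version of Solr's optimistic concurrency on _version_."""
    supplied = doc.get("_version_", 0)
    current = store.get(doc["id"], {}).get("_version_", 0)
    if supplied > 0 and supplied != current:
        raise ConflictError("version conflict")  # Solr answers HTTP 409 here
    doc["_version_"] = current + 1  # Solr allots a new clock-based version
    store[doc["id"]] = doc
    return doc["_version_"]

v1 = update({"id": "1", "fullname_s": "john smith"})
v2 = update({"id": "1", "fullname_s": "john q smith", "_version_": v1})
assert v2 != v1  # the version changes on every successful (atomic) update
```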
Amrit Sarkar
Interesting observation, Nawab: with ramBufferSizeMB=20G, you are getting
20GB segments on 6.5 or less? Or a GB?
Amrit Sarkar
On Tue, Oct 17, 2017 at 12:48 PM,
https://issues.apache.org/jira/browse/SOLR-10829: IndexSchema should
enforce that uniqueKey field must not be points based
The description tells the real reason.
Amrit Sarkar
Chandru,
I didn't try the above config, but why have you defined both "mergePolicy" and
"mergePolicyFactory" and passed different values for the same parameters?
> 10
> 1
>
>
> 10
> 10
>
>
Amrit Sarkar
tokenStream, has analysed tokens: "vacat", which obviously doesn't
match the extracted term.
Why do the df and qf values concern what we pass in "hl.fl"? Shouldn't the
query which is to be highlighted be analysed by the field passed in "hl.fl"?
But then multiple fields can be passed in "hl.fl". Just wondering how it is
supposed to be done. Any explanation will be fine.
Amrit Sarkar
Following Pratik's spot-on comment, and not really related to your question:
even the "partitionKeys" parameter needs to specify the "over" field
while using "parallel" streaming.
Amrit Sarkar
the commit strategy in indexing. With
auto-commit set so high, are you committing after each batch? If yes, what's
the number?
Amrit Sarkar
ues in the same shard. Can you share
the architecture of the setup?
Amrit Sarkar
On Tue, Nov 7, 201
Maybe not a relevant fact on this, but: "addAndDelete" is triggered by
"Reordering of DBQs"; that means there are non-executed DBQs (deletes by query)
present in the updateLog and an add operation is also received. Solr makes sure
the DBQs are executed first and then the add operation is executed.
ng expression:
expr=rollup(
>
> search(collection1,
>
> zkHost="localhost:9983",
>
> qt="/export",
>
> q="*:*",
>
> fq=a_s:filter_a
>
> fl="id,a_s,a_i,a_f",
>
>
ld from each document to identify a shard where the document belongs. If
> the field specified is missing in the document, however, the document will
> be rejected. You could also use the _route_ parameter to name a specific
> shard.
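The routing described in the quoted passage can be illustrated with a toy sketch: the route key is hashed onto a 32-bit ring that is split into contiguous per-shard ranges. Solr's compositeId router actually uses MurmurHash3; crc32 stands in here purely for illustration, and the shard layout is simplified:

```python
import zlib

def pick_shard(route_key, num_shards):
    """Map a route key onto one of num_shards contiguous hash ranges.
    Illustrative only: Solr's compositeId router uses MurmurHash3,
    and real shards carry explicit hash ranges in the cluster state."""
    h = zlib.crc32(route_key.encode("utf-8"))  # unsigned, 0 .. 2**32 - 1
    range_size = 2**32 // num_shards
    return min(h // range_size, num_shards - 1)

# Every update carrying the same route key lands on the same shard:
assert pick_shard("user42", 4) == pick_shard("user42", 4)
print(pick_shard("user42", 4))
```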
Amrit Sarkar
Hi Martin,
I tested the same SolrJ application code on my system; it worked just fine
on Solr 6.6.x. My Solr client is "CloudSolrClient", which I think doesn't
make any difference. Can you show the response and field declarations if
you are still facing the issue?
Amrit Sarkar
a probable reason for the mismatch between MBeans stats and manual
counting in logs, as not everything gets logged. Need to check that once.
Amrit Sarkar
tring". You need to write an analyzer chain for
the same fieldType and not include LowerCaseFilterFactory, which is
responsible for lowercasing the token both in the query and while indexing.
Something like this will work for you:
I listed "KeywordTokenizerFactory" considering this
ld. The KeywordTokenizerFactory doesn't split the incoming text up
> _at all_. So if the input is
> "my dog has fleas" you can't search for just "dog" unless you use the
> extremely inefficient *dog* form. If you want to search for words, use
> an tokenizer t
arameters, as "route.field" is a
collection-specific property maintained in ZooKeeper (state.json /
clusterstate.json).
https://lucene.apache.org/solr/guide/6_6/collections-api.html#CollectionsAPI-create
I highly recommend not altering core.properties manually when dealing with
SolrCloud a
Kenny,
This is a known behavior in multi-sharded collections where the field values
belonging to the same facet don't reside in the same shard. Yonik Seeley has
improved the JSON Facet feature by introducing the "overrequest" and "refine"
parameters.
Kindly checkout Jira:
https://issues.apache.org/jira/brow
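Those refinement knobs are set per facet in the JSON Facet request body. A sketch of such a request is below; the facet name and field are hypothetical, while "refine" and "overrequest" are the parameters referred to above:

```python
import json

# "refine": true asks Solr to re-check top buckets against all shards;
# "overrequest" fetches extra buckets per shard to reduce missed values.
facet_request = {
    "facet": {
        "categories": {            # hypothetical facet name
            "type": "terms",
            "field": "cat_s",      # hypothetical field
            "limit": 10,
            "overrequest": 20,
            "refine": True,
        }
    }
}
print(json.dumps(facet_request))
```

The serialized body would be sent as the `json.facet` part of a search request.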
split a
shard, which will divide the index and hence the hash range. I strongly
recommend you reconsider your SolrCloud design for your use-case.
Amrit Sarkar
A little more information would be beneficial:
are COLO1 and COLO2 collections? If yes, do both have the same configuration,
and are you positively issuing deletes to IDs already present in the index, etc.?
Amrit Sarkar
Hi Venkat,
FYI: index-time boosting has been deprecated in the latest versions of Solr:
https://issues.apache.org/jira/browse/LUCENE-6819.
Not sure which version you are on, but it is best to consider the comments on
the JIRA before using it.
Amrit Sarkar
Sundeep,
You would probably like to explore
http://lucene.apache.org/solr/6_6_1/solr-core/org/apache/solr/analysis/ReversedWildcardFilterFactory.html
here.
Thanks
Amrit Sarkar
On 18 Nov 2017 6:06 a.m., "Sundeep T" wrote:
> Hi,
>
> We have several indexed string fields whi
entire index of the
leader unless the difference in doc versions is more than
"numRecordsToKeep", which defaults to 100 unless you have modified it in
solrconfig.xml.
Looking forward to your analysis.
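The recovery rule described above reduces to a simple predicate: peer sync can only replay what the leader still holds in its update log (numRecordsToKeep, default 100); beyond that, the replica must fall back to full replication. A toy sketch of just that decision:

```python
def recovery_mode(missed_updates, num_records_to_keep=100):
    """Toy version of the rule from the email: PeerSync can only replay
    updates the leader still keeps in its tlog (numRecordsToKeep)."""
    if missed_updates <= num_records_to_keep:
        return "peersync"
    return "full-replication"

print(recovery_mode(40))    # -> peersync
print(recovery_mode(5000))  # -> full-replication
```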
Amrit Sarkar
after the bootstrapping is done.
Reloading makes the core open a new searcher. While an explicit commit is
issued at the target leader after the bootstrap is done, the followers are left
unattended even though the docs are copied over.
Amrit Sarkar
Tom,
(and take care not to restart the leader node otherwise it will replicate
> from one of the replicas which is missing the index).
How is this possible? OK, I will look more into it. I would appreciate it if
someone else also chimes in if they have a similar issue.
Amrit Sarkar
Tom,
Thank you for trying out a bunch of things with the CDCR setup. I was able to
replicate the exact issue on my setup; this is a problem.
I have opened a JIRA for the same:
https://issues.apache.org/jira/browse/SOLR-11724. Feel free to add any
relevant details as you like.
Amrit Sarkar
% of the total heap memory allocated (16GB).
Looking forward to positive responses.
Amrit Sarkar
Emir,
Solr version: 6.6, SolrCloud
We followed the instructions in the README.md of the GitHub project.
Amrit Sarkar
embedded ZK?
Amrit Sarkar
Following up Erick's response,
This particular article will help with setting up Solr Cloud 6.3.0
with Zookeeper 3.4.6
<https://medium.com/@sarkaramrit2/setting-up-solr-cloud-6-3-0-with-zookeeper-3-4-6-867b96ec4272>
Amrit Sarkar
983 (leader)
| |-- server4:8983
|
--- shard4 - server1:7574 (leader)
| |-- server4:7574
|
--- shard5 - server3:7574 (leader)
|-- server5:8983
Hope this helps.
Amrit Sarkar
-- node'1' (leader) & node'2' (replica)
after splitshard;
shardA --- node'1' (leader) & node'2' (replica) (INACTIVE)
shardA_0 -- node'1' & node'2' (ACTIVE)
shardA_1 -- node'1' & node'2' (AC
Just gathering more information on this Solr-JDBC;
Is it an open-source plugin provided on https://github.com/shopping24/ and
not part of the actual lucene-solr project?
https://github.com/shopping24/solr-jdbc-synonyms
Amrit Sarkar
I am facing a somewhat similar issue lately where a full-import takes seconds
while a delta-import takes hours.
Can you share some more metrics/numbers related to the full-import and
delta-import: rows requested, rows fetched, and time taken?
Amrit Sarkar
pushed
to the leader of that shard.
The confluence link provides the insights in much detail:
https://lucidworks.com/2013/06/13/solr-cloud-document-routing/
Another useful link:
https://lucidworks.com/2013/06/13/solr-cloud-document-routing/
Amrit Sarkar
Sorry, The confluence link:
https://cwiki.apache.org/confluence/display/solr/Shards+and+Indexing+Data+in+SolrCloud
Amrit Sarkar
On Thu, Jun 1
.
Ideally then, a full-import or the delta-import should take similar time to
build the docs (fetch next row). I may very well be going entirely wrong
here.
Amrit Sarkar
Hi,
Yeah, if you look above, I have stated the same JIRA. I see your question on
3 DCs with an Active-Active scenario; I will respond there.
Amrit Sarkar
cluster
server logs?
Amrit Sarkar
On Fri, Aug 17, 2018 at 11:49 PM cdatta wrote:
> Any pointer wo
Basic Authentication in clusters is not supported as of today in CDCR.
On Fri, 7 Sep 2018, 4:53 pm Mrityunjaya Pathak,
wrote:
> I have setup two solr cloud instances in two different Datacenters Target
> solr cloud machine is copy of source machine with basicAuth enabled on
> them. I am unable t
Yeah, I am not sure how the authentication band-aid from the mentioned
Stack Overflow link will work. It is about time we included basic
authentication support in CDCR.
On Thu, 6 Sep 2018, 8:41 pm cdatta, wrote:
> Hi Amrit, Thanks for your response.
>
> We wiped out our complete installa
te of Technology Jaipur, India
Apologies in advance and kindly ignore if this doesn't concern you.
Amrit Sarkar
ne.apache.org/solr/7_5_0//solr-core/org/apache/solr/update/processor/AtomicUpdateProcessorFactory.html>
are broken and I am working on fixing it.
Amrit Sarkar
osts.get(state.getZkHost()) == null) {
> hosts.add(state.getZkHost(), new NamedList());
> }
> ((NamedList) hosts.get(state.getZkHost())).add(state.getTargetCollection(),
> queueStats);
> }
> rsp.add(CdcrParams.QUEUES, hosts);
>
>
Amrit Sarkar
Hi Arnold,
You need "cdcr-processor-chain" definitions in solrconfig.xml on both
clusters' collections. Both clusters need to act as source and target.
Amrit Sarkar
re memory? Not
sure, someone else can weigh in.
Amrit Sarkar
On Mon, Feb 26, 2018 at 7:37 PM, Vi
Nice. Can you please post the details on the JIRA too if possible:
https://issues.apache.org/jira/browse/SOLR-11959, and we can probably put up
a small patch adding this bit of information to the official documentation.
Amrit Sarkar
le if it is possible for you to apply the patch, build the jar and
try it out, please do and let us know.
For SOLR-9394 <https://issues.apache.org/jira/browse/SOLR-9394>, if you
can comment on the JIRA and post the sample docs, Solr logs, and relevant
information, I can give it a thoro
a very dirty patch which fixes the problem with basic tests to
prove it works. I will try to polish and finish this as soon as possible.
Amrit Sarkar
Hi Chris,
Sorry, I was off work for a few days and didn't follow the conversation. The
link is directing me to
https://issues.apache.org/jira/projects/SOLR/issues/SOLR-12063. I think we
have fixed the issue you stated in the JIRA, though the symptoms were
different from yours.
Amrit Sarkar
at updates are not actually batched in transit from the
> source to the target and instead each document is posted separately?
The batchsize and schedule regulate how many docs are sent across to the target.
This has more details:
https://lucene.apache.org/solr/guide/7_2/cdcr-config.html#the-replicator-el
Susheel,
That is the correct behavior: the "commit" operation is not propagated to the
target, and the documents will become visible on the target as per the commit
strategy devised there.
Amrit Sarkar
Elaino,
When you say commits are not working, do you mean the Solr logs are not
printing "commit" messages, or that documents are not appearing when you search?
Amrit Sarkar
Hi Susheel,
Pretty sure you are talking about this:
https://issues.apache.org/jira/browse/SOLR-11724
Amrit Sarkar
Chris,
After disabling the buffer on the source, kindly shut down all the nodes of the
source cluster first and then start them again. The tlogs will be removed
accordingly. BTW, CDCR doesn't abide by the numRecordsToKeep (100) or
numTlogs (10) limits.
Amrit Sarkar
Susheel,
At the time of the core reload, the logs must be complaining or at least
pointing in some direction. Each shard leader is responsible for spawning a
threadpool for the CDCR replicator to get the data over.
Amrit Sarkar
Chris,
Try indexing a few dummy documents and analyse whether the tlogs are getting
cleared or not. Ideally on restart it clears everything and keeps at most
2 tlogs per data folder.
Amrit Sarkar
pull type replicas can be designed
better, apart from that, if this is urgent need for you, please apply the
patches for your packages and probably give a shot. I will added extensive
tests for both the use-cases.
Amrit Sarkar
Pardon, * I have added extensive tests for both the use-cases.
Amrit Sarkar
On Thu, Apr 26, 2018 at 3
Hi Rajeswari,
No it is not. Source forwards the update to the Target in classic manner.
Amrit Sarkar
Brian,
If you are still facing the issue after disabling the buffer, kindly shut down
all the nodes at the source and then start them again; stale tlogs will start
purging themselves.
Amrit Sarkar
Jay,
Can you share a sample delete command you are firing at the source, to help
understand the issue with CDCR?
On Tue, 3 Jul 2018, 4:22 am Jay Potharaju, wrote:
> Hi
> The current cdcr setup does not work if my collection uses implicit
> routing.
> In my testing i found that adding documents works without
. SSL and Kerberized cluster will have the
payload/updates encrypted. Thank you for pointing it out.
Amrit Sarkar
Flow (27 July). The Fifth Elephant: 26 and 27 July. Registration link with 10%
discount on the conference: https://fifthelephant.in/2018/?code=SG65IC
For more details about any of these, write to i...@hasgeek.com or call 7676332020.
Amrit Sarkar
was started first and then the target. You need
to shut down all the nodes, both at source and target. Get the target nodes
up, all of them, before starting the source ones. The logs will be initialized
positively.
Amrit Sarkar