Re: s3 or other cloud hosted storage options?

2020-11-10 Thread Edward Ribeiro
Not yet. People at Salesforce are working on shared blob storage for Solr, but AFAIK they are redesigning the approach taken, that is, it is still under active development and not production-ready. See the talks below: https://www.youtube.com/watch?v=6fE5KvOfb6A https://www.youtube.com/watch?v=UeTFpNeJ

Re: Vector Scoring Plugin for Solr : Dot Product and Cosine Similarity

2020-06-20 Thread Edward Ribeiro
Hi Vincenzo, The vector search support in Solr is a work-in-progress with a lot of discussions scattered among some JIRA issues. Start here: https://issues.apache.org/jira/plugins/servlet/mobile#issue/SOLR-12890

Re: Tuning for 500+ field schemas

2020-03-18 Thread Edward Ribeiro
What are your hard and soft commit settings? This can have a large impact on the writing throughput. Best, Edward On Wed, Mar 18, 2020 at 11:43 AM Tim Robertson wrote: > > Thank you Erick > > I should have been clearer that this is a bulk load job into a write-only > cluster (until loaded when i

Re: Metadata info on Stored Fields

2020-02-17 Thread Edward Ribeiro
You know what, I think I missed a major description in my earlier email. I > want to be able to return additional data from stored fields alongside the > snippets during highlighting. In this case, the filename where this snippet > came from. Not sure your approach would address that. > > O

Re: Metadata info on Stored Fields

2020-02-17 Thread Edward Ribeiro
Hi, You may try to create two kinds of docs forming a parent-child relationship without nesting. Like 894 parent ... 3213 child 894 xxx portion of file 1 remaining portion of file 1 ... Then you can add metadata for each child doc. The search can be done on child docs but if you need to

Re: Solr 8.2 replicas use only 1 CPU at 100% every solr.autoCommit.maxTime minutes

2020-02-11 Thread Edward Ribeiro
Is your autoCommit configured to open new searchers? Did you try to set openSearcher to false? Edward On Tue, Feb 11, 2020 at 3:40 PM Vangelis Katsikaros wrote: > Hi > > On Mon, Feb 10, 2020 at 5:05 PM Vangelis Katsikaros > > wrote: > > > Hi all > > > > We run Solr 8.2.0 > > * with Amazon Corr
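For reference, a hedged solrconfig.xml sketch of hard commits that do not open a searcher, paired with soft commits for visibility (the times are placeholders, not recommendations):

```xml
<autoCommit>
  <!-- hard commit flushes segments to disk but does not open a new searcher -->
  <maxTime>${solr.autoCommit.maxTime:60000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <!-- soft commit is what makes new documents visible to searches -->
  <maxTime>${solr.autoSoftCommit.maxTime:120000}</maxTime>
</autoSoftCommit>
```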

Re: Support Tesseract in Apache Solr

2020-02-11 Thread Edward Ribeiro
I second Jorn: don't deploy Tesseract + Tika on the same server as Solr. Tika, especially with OCR enabled, will drain machine resources that could be used for indexing/searching. In addition to that, any malformed PDF could potentially shut down the Solr server. Best bet would be to use tik

Re: JSON from Term Vectors Component

2020-02-06 Thread Edward Ribeiro
Python's json lib will convert text as '{"id": 1, "id": 2}' to a dict, that doesn't allow duplicate keys. The solution in this case is to inject your own parsing logic as explained here: https://stackoverflow.com/questions/29321677/python-json-parser-allow-duplicate-keys One possible solution (bel
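A minimal sketch of the `object_pairs_hook` approach from that Stack Overflow thread (the list-grouping policy here is illustrative, and it assumes values are not themselves lists):

```python
import json

def collect_duplicates(pairs):
    """object_pairs_hook that keeps every value of a repeated key in a list."""
    result = {}
    for key, value in pairs:
        if key in result:
            # Promote the existing scalar to a list on the first duplicate.
            if not isinstance(result[key], list):
                result[key] = [result[key]]
            result[key].append(value)
        else:
            result[key] = value
    return result

doc = json.loads('{"id": 1, "id": 2, "name": "solr"}',
                 object_pairs_hook=collect_duplicates)
print(doc)  # {'id': [1, 2], 'name': 'solr'}
```

`json.loads` hands every (key, value) pair to the hook in document order, so no duplicate is lost before your code sees it.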

Re: Filtered join in Solr?

2020-02-04 Thread Edward Ribeiro
Just for the sake of an imagined scenario, you could use the [subquery] doc transformer. A query like the one below: /select?q=family: Smith&fq=watched_movies:[* TO *]&fl=*, movies:[subquery]&movies.q={!terms f=id v=$row.watched_movies} would bring back the results below: { "responseHeader":{

Re: Apache Solr HTTP health endpoint for blackbox_exporter probings

2020-01-30 Thread Edward Ribeiro
The healthcheck Jan showed is only available in SolrCloud mode. Edward On Thu, Jan 30, 2020 at 2:03 PM Daniel Trüssel wrote: > On 23.01.20 11:55, Jan Høydahl wrote: > > http://localhost:8983/solr/admin/info/health > > On our VMs this endpoint does not exist. > > How to enable this? > > kind regard

Re: Replica type affinity

2020-01-30 Thread Edward Ribeiro
Hi Karl, During collection creation you can specify the `createNodeSet` parameter as specified by the Solr Reference Guide snippet below: "createNodeSet Allows defining the nodes to spread the new collection across. The format is a comma-separated list of node_names, such as localhost:8983_solr,l

Re: How expensive is core loading?

2020-01-29 Thread Edward Ribeiro
Hi, Luke was a standalone app and is now a Lucene module. Read here: https://github.com/DmitryKey/luke You don't need Solr to use it (LukeRequestHandler is a plus). Best, Edward On Wed, Jan 29, 2020 at 20:35, Rahul Goswami wrote: > Thanks for your response Walter. But I could not find

Re: Easiest way to export the entire index

2020-01-29 Thread Edward Ribeiro
Hi Amanda, Below is a crude prototype in Bash that fetches documents from Solr using cursorMark: https://gist.github.com/eribeiro/de1588aaa1759c02ea40cc281e8aedc8 It is a crude prototype, but it should shed some light on your use case (I copied the code below too): Best, Edward --
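The cursorMark loop itself is simple; here is a Python sketch of just the pagination logic (the fetch function, cursor strings, and field names are placeholders — plug in your own HTTP client and collection, and remember that a real cursorMark query must sort on a combination that includes the uniqueKey):

```python
def export_all(fetch):
    """Page through an entire result set using Solr's cursorMark protocol.

    `fetch` is any callable taking a cursor string and returning a dict
    shaped like Solr's JSON response. Iteration stops when Solr echoes
    the same cursor back, which signals the last page.
    """
    cursor = "*"
    while True:
        resp = fetch(cursor)
        for doc in resp["response"]["docs"]:
            yield doc
        next_cursor = resp["nextCursorMark"]
        if next_cursor == cursor:  # same cursor back: no more pages
            break
        cursor = next_cursor

# A fake fetcher standing in for a real HTTP call such as:
#   requests.get(f"{solr}/select?q=*:*&sort=id asc&rows=100&cursorMark={cursor}")
def fake_fetch(cursor):
    pages = {"*": ({"docs": [{"id": 1}, {"id": 2}]}, "AoE"),
             "AoE": ({"docs": [{"id": 3}]}, "AoE")}
    body, nxt = pages[cursor]
    return {"response": body, "nextCursorMark": nxt}

docs = list(export_all(fake_fetch))
print(docs)  # [{'id': 1}, {'id': 2}, {'id': 3}]
```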

Re: BooleanQueryBuilder is not adding parenthesis around the query

2020-01-22 Thread Edward Ribeiro
looking for :) > > > > On Wed, Jan 22, 2020 at 1:08 PM Edward Ribeiro > > > wrote: > > > >> If you are using Lucene's BooleanQueryBuilder then you need to do > nesting > >> of your queries (roughly, a builder for each query enclosed in >

Re: BooleanQueryBuilder is not adding parenthesis around the query

2020-01-22 Thread Edward Ribeiro
match text:child from the result set); * = Lucene book, Solr book and Relevant Search book are excellent resources! Edward On Wed, Jan 22, 2020 at 15:07, Edward Ribeiro wrote: > If you are using Lucene's BooleanQueryBuilder then you need to do nesting > of your queries (roughly,

Re: BooleanQueryBuilder is not adding parenthesis around the query

2020-01-22 Thread Edward Ribeiro
If you are using Lucene's BooleanQueryBuilder then you need to do nesting of your queries (roughly, a builder for each query enclosed in "parenthesis"). A query like (text:child AND text:toys) OR age:12 would be: Query query1 = new TermQuery(new Term("text", "toys")); Query query2 = new TermQuery

Re: Lucene query to Solr query

2020-01-22 Thread Edward Ribeiro
equivalent to "+(topics:29)^2 (topics:38)^3 +(-id:41135)", I mean. :) Edward On Wed, Jan 22, 2020 at 1:51 PM Edward Ribeiro wrote: > Hi, > > A more or less equivalent query (using Solr's LuceneQParser) to > "topics:29^2 AND (-id:41135) topics:38^3" wou

Re: Lucene query to Solr query

2020-01-22 Thread Edward Ribeiro
Hi, A more or less equivalent query (using Solr's LuceneQParser) to "topics:29^2 AND (-id:41135) topics:38^3" would be: topics:29^2 AND (-id:41135) topics:38^3 Edward On Mon, Jan 20, 2020 at 1:10 AM Arnold Bronley wrote: > Hi, > > I have a Lucene query as following (toString represenation of

Re: Is it possible to add stemming in a text_exact field

2020-01-22 Thread Edward Ribeiro
Hi, One possible solution would be to create a second field (e.g., text_general) that uses StandardTokenizer, or another tokenizer that breaks the string into tokens, and use a copyField to copy the content from text_exact to text_general. Then, you can use the edismax parser to search both fields, but g
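A managed-schema sketch of that two-field setup (field and type names are illustrative):

```xml
<field name="text_exact"   type="string"       indexed="true" stored="true"/>
<field name="text_general" type="text_general" indexed="true" stored="false"/>
<copyField source="text_exact" dest="text_general"/>
```

Searching with edismax could then weight the exact field higher, e.g. `defType=edismax&qf=text_exact^10 text_general`, so exact matches outrank stemmed ones.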

Re: Failed to connect to server

2020-01-17 Thread Edward Ribeiro
> I have increased the number of maxConnections to see if this fixes the problem. Did this solve the "connection refused" issue? > I noticed in the log that there was an error from a curl statement that said 'Error: Solr core is loading' This is weird. Solr usually doesn't just reload cores. Are you

Re: Failed to connect to server

2020-01-16 Thread Edward Ribeiro
A regular update is a delete followed by an indexing of the document. So technically both are indexes. :) If there's an atomic update ( https://lucene.apache.org/solr/guide/8_4/updating-parts-of-documents.html ), Solr would throw some sort of version conflict exception like {"error":{ "metadat

Re: Error while updating: java.lang.NumberFormatException: empty String

2020-01-16 Thread Edward Ribeiro
Hi, There is a status_code field in the JSON snippet and it is being sent as a string with a single space. Maybe it should be an integer? Best, Edward On Thu, Jan 16, 2020 at 2:06 PM rhys J wrote: > While updating my Solr core, I ran into a problem with this curl statement. > > When I looked up the error, the on

Re: How do I add multiple values for same field with DIH script?

2020-01-16 Thread Edward Ribeiro
Hi, Are you sure content_text is a multivalued field (i.e., the field definition has multiValued="true" in managed-schema)? Edward On Thu, Jan 16, 2020 at 08:42, O. Klein wrote: > row.put('content_text', "hello"); > row.put('content_text', "this is a test"); > return row; > > will only retur
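For reference, a multivalued field declaration in managed-schema looks like this (field name and type are illustrative):

```xml
<field name="content_text" type="text_general" indexed="true" stored="true" multiValued="true"/>
```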

Re: remote debugging for docker solr

2020-01-13 Thread Edward Ribeiro
Hi, I was able to connect my IDE to Solr running in a container by using the following command: command: > bash -c "solr start -c -f -a -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005;" It starts SolrCloud ( -c ) and runs in the foreground ( -f ) so you don't need to r
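A docker-compose.yml fragment of the kind that would go with that command (image tag and service name are illustrative; the published port must match the JDWP address):

```yaml
services:
  solr:
    image: solr:8
    ports:
      - "8983:8983"
      - "5005:5005"   # JDWP debug port; must match address=5005 in -agentlib:jdwp
    command: >
      bash -c "solr start -c -f -a -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
```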

Re: Search phrase not parsed properly

2020-01-10 Thread Edward Ribeiro
I would follow Shawn's advice: remove the stopword filter from your field types' analysis chain... Up to you, as usual. Best, Edward On Fri, Jan 10, 2020 at 23:12, chester wrote: > Thanks, everyone. I found the stopword_en.txt file and saw that "will" was > included in there. I removed it

Re: Search phrase not parsed properly

2020-01-10 Thread Edward Ribeiro
ma#L724 Here is how to use the Solr Admin UI to test the analysis chain of your fields: https://lucene.apache.org/solr/guide/8_3/analysis-screen.html#analysis-screen Edward On Fri, Jan 10, 2020 at 22:36, Edward Ribeiro wrote: > You have to check your managed-schema to see if the

Re: remote debugging for docker solr

2020-01-10 Thread Edward Ribeiro
Could you share the content of your docker-compose.yml file? Did you expose the 5005 port in the cited YAML file? Best, Edward On Fri, Jan 10, 2020 at 20:43, Arnold Bronley wrote: > Hi, > > I have a running dockerized instance of Solr which runs fine with the > following setting for

Re: Search phrase not parsed properly

2020-01-10 Thread Edward Ribeiro
You have to check your managed-schema to see if the field type defines a stopword filter and which file it points to. There's a folder named 'lang' with many files, one for each language. If your field is configured for English, the filter will point to lang/stopword_en.txt. The stopwords.txt file is

Re: Search phrase not parsed properly

2020-01-10 Thread Edward Ribeiro
Hi, It looks like you are using the stopword filter and 'will' is a stop word, so it is removed by the analysis chain of the field. Please, test the analysis chain in the Solr Admin UI to see if this is the case. Best, Edward On Fri, Jan 10, 2020 at 21:30, chester wrote: > I'm using solr

Re: Edismax ignoring queries containing booleans

2020-01-10 Thread Edward Ribeiro
> > On Fri, 10 Jan 2020 at 10:46, Edward Ribeiro > wrote: > > > The fq is not affected by mm parameter because it uses Solr's default > query > > parser (LuceneQueryParser) that doesn't support it. But you can change > the > > parser used by fq this way

Re: Edismax ignoring queries containing booleans

2020-01-09 Thread Edward Ribeiro
t the case here). Please, let me know if any of the suggestions, or any other you come up with, solve the issue and don't forget to test those approaches so that you can avoid any performance degradation. Best, Edward On Fri, Jan 10, 2020 at 1:41 AM Edward Ribeiro wrote: > Hi Claire, >

Re: Edismax ignoring queries containing booleans

2020-01-09 Thread Edward Ribeiro
recordID:[19 TO 19]\n", > "F73CFBC7-2CD2-4aab-B8C1-9D19D427EAFB":"\n1.0 = sum of:\n 1.0 = sum of:\n1.0 = recordID:[20 TO 20]\n"}, > > The only visual difference I think is the ~2 which came after the initial part of the parsed query: > > Old Query

Re: Edismax ignoring queries containing booleans

2020-01-08 Thread Edward Ribeiro
> "explain":{}, > "QParser":"ExtendedDismaxQParser", > "altquerystring":null, > "boost_queries":null, > "parsed_boost_queries":[], > "boostfuncs":[""], > "timing&

Re: Edismax ignoring queries containing booleans

2020-01-06 Thread Edward Ribeiro
Hi Claire, You can add the following parameter `&debug=all` on the URL to bring back debugging info and share with us (if you are using the Solr admin UI you should check the `debugQuery` checkbox). Also, if you are searching a sequence of values you could perform a range query: recordID:[18 TO 2

Re: understanding solr metrics

2020-01-02 Thread Edward Ribeiro
Just adding some tidbits of info to Jason's answer: meanRate measures the mean rate of events (requests) since the timer was created. See: https://metrics.dropwizard.io/3.1.0/apidocs/com/codahale/metrics/Timer.html#getMeanRate-- Particularly, I don't think this metric is all that meaningful for mon

Re: ConcurrentModificationException in SolrInputDocument writeMap

2019-11-07 Thread Edward Ribeiro
You probably hit https://issues.apache.org/jira/projects/SOLR/issues/SOLR-8028 Regards, Edward On Wed, Nov 6, 2019 at 13:23, Mikhail Khludnev wrote: > Hello, Tim. > Please confirm my understanding. Does the exception happen in a standalone Java > ingesting app? > If it's so, does it reuse ei

Re: avgFieldLength calculation in BM25 in solr 7.4

2019-10-31 Thread Edward Ribeiro
Hi, Looking at the source code and term frequencies, it looks like: fieldLength = number of tokens prior to n-gram filter processing; avgFieldLength = total number of terms / number of documents. As you are using n-grams, 11 is the total number of terms while fieldLength is 2. See: https://lucene.apache.org/core/7_4_0/core/org/apache

Re: regarding Extracting text from Images

2019-10-26 Thread Edward Ribeiro
No. You should install tesseract-ocr on the same box your Solr instance is on, and configure Solr so that the embedded Tika is able to use Tesseract to do the OCR of images. Best, Edward On Wed, Oct 23, 2019 at 20:08, suresh pendap wrote: > Hi Alex, > Thanks for your reply. How do we integrate te

Re: Solr status showing wrong data

2019-09-24 Thread Edward Ribeiro
Yup, it looks like the percentage over Xmx. See https://github.com/apache/lucene-solr/blob/branch_7_1/solr/core/src/java/org/apache/solr/handler/admin/SystemInfoHandler.java#L300-L317 double percentUsed = ((double)(used)/(double)max)*100; where max = runtime.maxMemory(); And maxMemory is as see

Re: QTime

2019-07-12 Thread Edward Ribeiro
Yeah, for network latency I would recommend a tool like Charles Proxy. Edward On Thu, Jul 11, 2019 at 20:59, Erick Erickson wrote: > true, although there's still network that can't be included. > > > On Jul 11, 2019, at 5:55 PM, Edward Ribeiro > wrote: > > >

Re: QTime

2019-07-11 Thread Edward Ribeiro
Wouldn't it be a case for using the &rows=0 parameter on those requests? Wdyt? Edward On Thu, Jul 11, 2019 at 14:24, Erick Erickson wrote: > Not only does QTime not include network latency, it also doesn't include > the time it takes to assemble the docs for return, which can be lengthy > when r

Re: SolrCloud limitations?

2019-05-13 Thread Edward Ribeiro
Just an addendum to Erick's answer: you can see also the possibility of using different replica types like TLOG or PULL. It will depend on your use case and performance requirements. See https://lucene.apache.org/solr/guide/7_7/shards-and-indexing-data-in-solrcloud.html Best, Edward On Mon, May 1

Re: Softer version of grouping and/or filter query

2019-05-13 Thread Edward Ribeiro
Regards, Edward On Fri, May 10, 2019 at 6:09 PM Doug Reeder wrote: > Thanks much! I dropped price from the fq term, changed to an edismax > parser, and boosted with > bq=price:[150+TO+*]^100 > > > > On Thu, May 9, 2019 at 7:21 AM Edward Ribeiro > wrote: >

Re: Softer version of grouping and/or filter query

2019-05-09 Thread Edward Ribeiro
Em qua, 8 de mai de 2019 18:56, Doug Reeder escreveu: > > Similarly, we have a filter query that only returns products over $150: > fq=price:[150+TO+*] > > Can this be changed to a q or qf parameter where products less than $150 > have score less than any product priced $150 or more? (A price hig

Re: Is anyone using proxy caching in front of solr?

2019-02-25 Thread Edward Ribeiro
Maybe you could add a length filter factory to filter out queries with 2 or 3 characters using https://lucene.apache.org/solr/guide/7_4/filter-descriptions.html#FilterDescriptions-LengthFilter ? PS: this filter requires a max length too. Edward On Thu, Feb 21, 2019 at 04:52, Furkan KAMACI wrote:
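A hedged sketch of such an analysis chain (the field type name and min/max values are placeholders to adapt):

```xml
<fieldType name="text_min_length" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- drops tokens shorter than 4 characters; max is mandatory -->
    <filter class="solr.LengthFilterFactory" min="4" max="255"/>
  </analyzer>
</fieldType>
```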

Re: by: java.util.zip.DataFormatException: invalid distance too far back reported by Solr API

2019-01-30 Thread Edward Ribeiro
Probably one of the PDFs is corrupted. As you are writing the routine to upload them, try to isolate the ones that are throwing the exception. Regards, Edward On Wed, Jan 30, 2019 at 17:49, Monique Monteiro wrote: > Hi all, > > I'm writing a Python routine to upload thousands of PDF files to Solr, and > aft

Re: Solr Cloud Issue

2019-01-21 Thread Edward Ribeiro
AFAIK, to start as SolrCloud you should add the "-c" switch option as below: bin/solr start -c -z 192.168.1.6:2181,192.168.1.7:2181,192.168.1.102:2181/solr Otherwise, you'll be starting a standalone Solr instance. Edward On Mon, Jan 21, 2019 at 3:15 PM wrote: > Hello, > > I am configuring the

Re: Starting optimize... Reading and rewriting the entire index! Use with care

2018-12-26 Thread Edward Ribeiro
Optimize is an expensive operation. It will cost you 2x disk space, plus CPU and RAM. It is usually advisable not to optimize unless you really need to, and do not optimize frequently. Whether this can impact the server and search depends on the index size and hardware specification. See more here

Re: Soft commit and new replica types

2018-12-14 Thread Edward Ribeiro
on the > > leader. > > > > > Followers fetch the segments and **reload the core** every 150 > > seconds > > > > > > > > Edward, "reload" shouldn't really happen in regular TLOG/PULL > fetches. > > Are > > > > you seein

Re: Soft commit and new replica types

2018-12-13 Thread Edward Ribeiro
> > Ah, right. Ignore my comment. Commit will only occur on the followers > > when there are new segments to pull down, so you're right, roughly > > every second poll would find things to bring down and open a > > new searcher. > > On Sun, De

Re: Soft commit and new replica types

2018-12-09 Thread Edward Ribeiro
effect with TLOG - PULL collection, > > I suppose, I have to have : 30 > > (yes, I understand that newSearchers start asynchronously on leader and > replicas) > > Am I right? > > -- > > Vadim > > > > > >> -Original Message- > >&g

Re: Soft commit and new replica types

2018-12-08 Thread Edward Ribeiro
Some insights in the new replica types below: On Sat, December 8, 2018 08:42, Vadim Ivanov < vadim.iva...@spb.ntk-intourist.ru wrote: > > From Ref guide we have: > " NRT is the only type of replica that supports soft-commits..." > "If TLOG replica does become a leader, it will behave the same as

Re: Error when loading configset

2018-12-04 Thread Edward Ribeiro
By default, ZooKeeper's znode maximum size limit is 1MB. If you try to send more than this, an error occurs. You can increase this size limit, but it has to be done both on the server (ZK) and client (Solr) side. See this discussion for more details: http://lucene.472066.n3.nabble.com/How-to-s

Re: Solr Setup using NRT and PULL replicas

2018-12-02 Thread Edward Ribeiro
Mixing NRT and TLOG/PULL replicas is not recommended. It should be either all NRT nodes, or TLOG nodes mixed (or not) with PULL replicas. As you know, all-PULL replicas is not possible. According to the talk below, one of the reasons is that if you have NRT mixed with TLOG and PULL replicas then a leadership cha

Re: Query regarding Dynamic Fields

2018-11-27 Thread Edward Ribeiro
You should provide the full name of the dynamic field in the query like q=s_myfield:foo, for example. Solr doesn't allow field prefix queries like q=s_*:foo. Edward Em 27 de nov de 2018 12:08, "jay harkhani" escreveu: Hello All, We are using dynamic fields in our collection. We want to use i

Re: Manage new nodes types limit

2018-11-27 Thread Edward Ribeiro
Idk if you can promote a replica from PULL to TLOG, for example. You could accomplish this by deleting and then adding the replica, imho. Also, when adding a replica you can specify the type parameter (nrt, pull, tlog); see https://lucene.apache.org/solr/guide/7_4/collections-api.html#addreplica Edward

Re: Solr Cloud configuration

2018-11-20 Thread Edward Ribeiro
Hi David, Well, as a last resort you can fall back to the classic schema.xml if you are using standalone Solr and don't mind giving up the Schema API. Then you are back to manually editing conf/ files. See: https://lucene.apache.org/solr/guide/7_4/schema-factory-definition-in-solrconfig.html Best regard

Re: Solr cache clear

2018-11-20 Thread Edward Ribeiro
Disabling or reducing autowarming can help too, in addition to cache size reduction. Edward On Tue, Nov 20, 2018 at 17:29, Erick Erickson wrote: > Why would you want to? This sounds like an XY problem, there's some > problem you think would be cured by clearing the cache. What is > that problem? > > B

Re: Solr statistics

2018-11-20 Thread Edward Ribeiro
You could try to use that function in a stats field. Edward On Tue, Nov 20, 2018 at 9:24 AM Anil wrote: > > Thanks Edward. > > Can we find stats on two fields ? (eg - sum = sum of (userful+not useful)) ? > > On Tue, 20 Nov 2018 at 16:14, Edward Ribeiro > wrote: > >

Re: Solr statistics

2018-11-20 Thread Edward Ribeiro
You are using a function query as stats.field and as seen here: https://lucene.apache.org/solr/guide/7_4/function-queries.html the syntax for termfreq is termfreq(field_name, value). You're using termfreq('num_not_useful','num_useful'). It looks like num_useful is a numeric (int, float) type in you

Re: Question about elevations

2018-11-19 Thread Edward Ribeiro
Just complementing Alessandro's answer: 1. the elevateIds are inserted into the query, server side (a query expansion indeed); 2. the query is executed; 3. elevatedIds (if found) are popped up to the top of the search results via boosting; Edward On Mon, Nov 19, 2018 at 3:41 PM Alessandro Benedet

Re: Sort index by size

2018-11-19 Thread Edward Ribeiro
One more tidbit: are you really sure you need all 20 fields to be indexed and stored? Do you really need all those 20 fields? See this blog post, for example: https://www.garysieling.com/blog/tuning-solr-lucene-disk-usage On Mon, Nov 19, 2018 at 1:45 PM Walter Underwood wrote: > > Worst case is

Re: Soft commits and new Searcher

2018-11-19 Thread Edward Ribeiro
Hi Walter, A searcher has an immutable (stale) view of the index as of when it was created. Therefore, a soft commit always opens a new searcher, because this new searcher will reflect the changes in the index since the last commit. When you are doing a hard commit you have the option of not opening t

Re: Solr IndexSearcher lifecycle

2018-10-28 Thread Edward Ribeiro
On Fri, Oct 26, 2018 at 10:38 AM Xiaolong Zheng < xiaolong.zh...@mathworks.com> wrote: Hi, But when things come to Solr world which in a Java Webapp with servlet dispatcher. Do we also keep reusing the same IndexSearcher instance as long as there is no index changing? Yes. The IndexSearcher is

Re: UUIDField in defined schema

2018-08-13 Thread Edward Ribeiro
Go to solrconfig.xml and replace the schemaless mode's update processor chain named "add-unknown-fields-to-the-schema" by this one: It will auto-generate the UUID. If you want to use a UUIDField instead of string for the uniqueKey (id) then make the changes below in managed-schema: Edward On Wed
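The original XML did not survive the archive, but a hedged sketch of wiring a UUID generator into that chain could look like the following (chain attributes and field name are assumptions to adapt to your solrconfig.xml):

```xml
<updateRequestProcessorChain name="add-unknown-fields-to-the-schema"
                             default="${update.autoCreateFields:true}">
  <!-- generates a UUID for the id field when the document has none -->
  <processor class="solr.UUIDUpdateProcessorFactory">
    <str name="fieldName">id</str>
  </processor>
  <!-- existing processors of the chain go here, ending with RunUpdateProcessorFactory -->
</updateRequestProcessorChain>
```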

Re: FieldValueCache in solr 6.6

2018-07-25 Thread Edward Ribeiro
FieldValueCache is used by faceting, mostly. So, you would need to execute warming faceting queries to pre-populate it. More info in this old mailing list topic: http://lucene.472066.n3.nabble.com/Loading-data-to-FieldValueCache-tp4175721.html Cheers, Edward On Wed, Jul 11, 2018 at 02:09, zhang.m

Re: How to know index file in OS Cache

2015-09-25 Thread Edward Ribeiro
You can use pcstat ( https://github.com/tobert/pcstat ) to get page cache statistics for files. I have used this app in the past to see which and how much Lucene indexes were on Linux page cache. Edward On Fri, Sep 25, 2015 at 2:22 PM, Jeff Wartes wrote: > > > I’ve been relying on this: > http

Re: Indexed & stored

2015-08-13 Thread Edward Ribeiro
My two cents, If nothing else, declaring them as indexed="true" and stored="true" helps to self-document the schema and makes its options explicit. Best, Eddie On Thu, Aug 13, 2015 at 12:07 PM, Erick Erickson wrote: > No. But how do they default to "true"? In the fieldType? Which will be > po

Re: Hard Commit not working

2015-07-30 Thread Edward Ribeiro
lanki wrote: > Hi Edwards, > I am only sending 1 document for indexing then why it is > committing instantly. I gave to 6. > > On Thu, Jul 30, 2015 at 8:26 PM Edward Ribeiro > wrote: > > > Your is set to 1. This is the number of pending docs before

Re: Hard Commit not working

2015-07-30 Thread Edward Ribeiro
Your <maxDocs> is set to 1. This is the number of pending docs before autocommit is triggered too. You should set it to a higher value. Edward On 30/07/2015 11:43, "Nitin Solanki" wrote: > Hi, >I am trying to index documents using solr cloud. After setting, > to 6
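For reference, a solrconfig.xml autoCommit block of that shape (the values are illustrative, not recommendations):

```xml
<autoCommit>
  <maxDocs>10000</maxDocs>      <!-- pending docs before an automatic hard commit -->
  <maxTime>60000</maxTime>      <!-- ms; whichever threshold is hit first triggers the commit -->
  <openSearcher>false</openSearcher>
</autoCommit>
```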