Sure, Mikhail.
One question on this: if we wish to handle this by creating a custom
update processor, do we also need to check for race
conditions?
On Mon, Dec 3, 2018 at 8:21 PM Mikhail Khludnev wrote:
>
> So far, you wipe the whole block and insert a new one as a whole.
>
> On Mon, Dec 3, 2018 at 5:13
You can add, say, a ScriptUpdateProcessor that checks this for you
pretty easily.
Have you looked at the overwrite=false option (assuming you're not
assigning _version_ yourself)?
Best,
Erick
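For illustration, a ScriptUpdateProcessor is wired into an update chain in solrconfig.xml roughly like this (the chain name and script name are hypothetical; the version check itself would live in the script):

```xml
<updateRequestProcessorChain name="check-version">
  <!-- Runs checkVersion.js against each incoming document; the script
       can inspect _version_ and skip or reject the doc as needed. -->
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <str name="script">checkVersion.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```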
On Mon, Dec 3, 2018 at 11:57 AM lstusr 5u93n4 wrote:
>
> Hi All,
>
> I have a scenario where I'm trying
And I forgot to mention TolerantUpdateProcessor, which might be another approach.
On Mon, Dec 3, 2018 at 12:57 PM Erick Erickson wrote:
>
> You can add, say, a ScriptUpdateProcessor that checks this for you
> pretty easily.
>
> Have you looked at the Overwrite=false option (assuming you're not
> assign
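For reference, a sketch of how the TolerantUpdateProcessor mentioned above might be added to a chain in solrconfig.xml (the chain name and maxErrors value are just examples):

```xml
<updateRequestProcessorChain name="tolerant-chain">
  <!-- Keep processing the batch even if some documents fail;
       failures are reported per-document in the response. -->
  <processor class="solr.TolerantUpdateProcessorFactory">
    <int name="maxErrors">10</int>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```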
Safest is to create a new collection, then shut it down. Now copy all the
indexes to the index dir for the corresponding replicas and start Solr back
up.
On Mon, Dec 3, 2018, 11:12 Rekha wrote:
> Hi to all, my PC is physically damaged, so my Solr cloud instance is
> down. My Solr cloud instance data path is
Hi All,
I have a scenario where I'm trying to enable batching on the solrj client,
but trying to see how that works with Optimistic Concurrency.
From what I can tell, if I pass a list of SolrInputDocument to my solr
client, and a document somewhere in that list contains a `_version_` field
that
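As background, Solr's documented optimistic-concurrency rules for the `_version_` field can be summarized with a small simulation (this is not SolrJ code, just the decision table: a positive version must match exactly, 1 means "must exist", negative means "must not exist", 0 disables the check):

```python
def version_check(supplied, existing):
    """Simulate Solr's optimistic-concurrency rules for _version_.

    supplied: the _version_ value on the incoming document
    existing: the current version in the index, or None if the doc is absent
    Returns True if the update would be accepted, False if Solr would
    reject it with a version conflict (HTTP 409).
    """
    if supplied == 0:
        return True                      # no concurrency control requested
    if supplied < 0:
        return existing is None          # document must NOT already exist
    if supplied == 1:
        return existing is not None      # document must exist, any version
    return existing == supplied          # must exist with this exact version
```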
Hi to all,
My PC is physically damaged, so my Solr cloud instance is down. My
Solr cloud instance data path is a network shared drive, so all the
indexed data is available. How can I use/recover that data from
another/alternate Solr cloud instance?
Thanks,
Rekha Karthick
Hi Solr Team,
When I try to implement SSL for the multi-node Solr Cloud cluster.
Below is my Solr Cloud cluster environment:
* 3 ZooKeeper PCs running version 3.4.13, on Windows OS
* 2 Solr Cloud nodes running version 7.5.0, on Windows OS.
Each
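For context, on Windows SSL is typically enabled by setting variables in bin\solr.in.cmd on each node; a minimal sketch (the keystore path and password are placeholders for your own values):

```bat
set SOLR_SSL_ENABLED=true
set SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.jks
set SOLR_SSL_KEY_STORE_PASSWORD=secret
set SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.jks
set SOLR_SSL_TRUST_STORE_PASSWORD=secret
```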
Others from the bin/solr script. Note that some are optional (JMX).
But to emphasize what Jan said: all of these are configurable, so you need
to make sure that whoever set up your system didn't set them to
something else.
echo " -p Specify the port to start the Solr HTTP
listener on; defa
Actually, just to correct myself. Solr uses configsets in two different
ways (very unfortunate):
1) When you do bin/solr create -c name -d configset, in which case the
content of configset directory is copied
2) When you actually link to a configset as a common configuration, in
which case I think n
I am not sure I fully understand what you are saying.
When you create a collection based on a configset, all the files
should be copied, including the stopwords.
You can also provide an absolute path.
Solr also supports variable substitutions (as seen in solrconfig.xml
library statements), but I
Yeah, but if I define them in the schema of the configset, the custom file with
stopwords is in a directory relative to the collection and not in the configset.
So is there a way to define the path to the stopwords file with the collection as a
variable?
The stopwords are defined at the field type level as part of the
analyzer chain. So, you have per-field granularity. Not just
per-collection.
As stop-words use files (though we have a managed version as well),
you can share or not share as much as you want, even across different
field type definitions
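For illustration, the per-field-type wiring looks roughly like this in the schema (the field type name and file path are examples; the words path is resolved relative to the collection's conf directory):

```xml
<fieldType name="text_en" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- each field type can point at its own (or a shared) stopwords file -->
    <filter class="solr.StopFilterFactory" words="lang/stopwords_en.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>
```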
I'm using Solr standalone and I want to use shared stopwords plus custom
stopwords per collection. Is this possible?
So far, you wipe the whole block and insert a new one as a whole.
On Mon, Dec 3, 2018 at 5:13 PM Lucky Sharma wrote:
> Hi,
> Thanks, Mikhail.
> But what about the suggestions for v6.6+?
>
> Regards,
> Lucky Sharma
> On Mon, Dec 3, 2018 at 7:07 PM Mikhail Khludnev wrote:
> >
> > Hi,
> > This might be improved in 8.
Hi,
Thanks, Mikhail.
But what about the suggestions for v6.6+?
Regards,
Lucky Sharma
On Mon, Dec 3, 2018 at 7:07 PM Mikhail Khludnev wrote:
>
> Hi,
> This might be improved in 8.0
> see https://issues.apache.org/jira/browse/SOLR-5211
>
> On Mon, Dec 3, 2018 at 2:39 PM Lucky Sharma wrote:
>
> > H
Hi Alex & Andrea, thanks for the reply.
But Alex, our main idea was to reduce network latency, since the only
processing needed is passing the input to the next call, which is entirely
Solr params, like facets, sorting, query, etc. That's the reason I am
looking for this approach.
Thanks, Andrea but in my case the cores ar
Hi all,
I just noticed this and I just wanted to share with you:
Full-text search is everywhere nowadays and FOSDEM 2019 will have a dedicated
devroom for search on Sunday the 3rd of February.
We would like to invite submissions of presentations from developers,
researchers, and users of ope
Hi,
This might be improved in 8.0
see https://issues.apache.org/jira/browse/SOLR-5211
On Mon, Dec 3, 2018 at 2:39 PM Lucky Sharma wrote:
> Hi,
> I have a query regarding block join update,
> As far as I know, we cannot update the single doc of a block, we have
> to delete the complete block and
Hi,
What Alexander said is right, but if in your scenario you would still go
for that, you could try this [1], which should fit your need.
Best,
Andrea
[1] https://github.com/SeaseLtd/composite-request-handler
On Mon, 3 Dec 2018, 13:26 Alexandre Rafalovitch wrote:
> You should not be exposing Solr direc
You should not be exposing Solr directly to the client, but treating
it more as a database.
Given that, why would you not write your processing code in that
middle-ware layer?
Regards,
Alex.
On Mon, 3 Dec 2018 at 06:43, Lucky Sharma wrote:
>
> Hi have one scenario,
> where I need to make a se
Hi Jan,
Thank you.
To summarize we need to open these ports within the cluster:
8983
2181
2888
3888
Regards,
Moshe Recanati
CTO
Mobile + 972-52-6194481
Skype : recanati
More at: www.kmslh.com | LinkedIn | FB
-Original Message-
From: Jan Høydahl
Sent: Monday, December 3, 2018 12
Hi, I have one scenario
where I need to make sequential calls to Solr. Both requests
are sequential, and the output of the first is required to set some
params of the second search request.
So for such scenarios, I am writing a plugin which will internally
handle both the requests and gives
Hi,
I have a query regarding block join update,
As far as I know, we cannot update a single doc of a block; we have
to delete the complete block and reindex it again.
Please clarify if there is something wrong in my understanding.
So for an update in either parent or child, is it recommended to
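For context, a parent and its children are indexed together as one block, so changing any part means resending the whole unit. A sketch in Solr's XML update format (the ids and fields here are made up):

```xml
<add>
  <doc>
    <field name="id">parent-1</field>
    <field name="type">parent</field>
    <!-- nested child documents: the whole block is replaced together -->
    <doc>
      <field name="id">child-1</field>
      <field name="type">child</field>
    </doc>
  </doc>
</add>
```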
Hi Joel,
Thanks for the info :)
On Wed, Nov 14, 2018 at 8:13 PM Joel Bernstein wrote:
>
> The implementation is as follows:
>
> 1) There are "stream sources" that generate results from Solr Cloud
> collections. Some of these include: search, facet, knnSearch, random,
> timeseries, nodes, sql etc..
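For illustration, here is what a couple of the stream sources listed above look like in streaming-expression syntax (the collection and field names are hypothetical):

```
search(books, q="*:*", fl="id,title", sort="id asc", qt="/export")
random(books, q="*:*", rows="10", fl="id")
```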
Hi
This depends on your exact configuration, so you should ask the engineers who
deployed ZK and Solr, not this list.
If the default Solr port is used, you'd need at least 8983 open between servers
and from the app server to the cluster.
If the default ZK port is used, you'd need port 2181 open between
Hi Danilo,
you have to give more info about your system and the config:
- 30 GB RAM (physical RAM?): how much heap do you have for Java?
- how large (in GB) is your 40 million docs of raw data being indexed?
- how large is your index (in GB) with 40 million docs indexed?
- which version of Solr an
Hello all,
We have a configuration with a single node with 30 GB of RAM.
We use it to index ~40 million documents.
We perform queries with the edismax parser that often contain edismax
subqueries with the syntax
'_query_:{!edismax mm=X v=$subqueryN}'
Often X == 1.
This solves the "too many
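As an illustration of the parameter-substitution syntax above (the parameter names and search terms are made up):

```
q=_query_:{!edismax mm=1 v=$subquery1} AND _query_:{!edismax mm=2 v=$subquery2}
subquery1=solr cloud
subquery2=block join update
```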
Hi,
We're currently running SolrCloud with 3 servers: 3 ZK and 3 Solr nodes,
one of each on each machine.
Our security team would like to open only the required ports between the
servers.
Please let me know which ports we need to open between the servers?
Thank you
Regards,
Moshe Recanati
CTO