Hi all,
I see that I can easily create/restore backups of an entire collection:
https://cwiki.apache.org/confluence/display/solr/Collections+API.
I now have a situation where these backups are filling up a disk, so I need
to get rid of some. On the page above I don't see an API call to delete a
backup. What
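As far as I can tell there is no Collections API call for deleting a backup
in this version; a backup is just a directory named after the backup under
the location you passed to action=BACKUP, so it can be removed with
filesystem tools. A minimal sketch, assuming a shared mount at /backups and
a collection called mycoll:

  curl 'http://localhost:8983/solr/admin/collections?action=BACKUP&name=nightly-2017-01-20&collection=mycoll&location=/backups'

  # later, reclaim the disk by removing backups you no longer need
  rm -rf /backups/nightly-2017-01-20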
Dear Solr gurus,
I'm having a hard time using block-join queries on nested documents with
multi-select facets. We currently index products with their variations as
nested documents, like the following:
Product-1: t-shirt => brand:Nike doc_type:0 ...
- SKU-A => size:S color:blue doc_type:1 text-fields:"small
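For reference, a minimal sketch of that layout and a block-join query over
it, assuming a 'products' core and that doc_type:0 matches all parent
documents and only parents:

  curl -X POST -H 'Content-Type: application/json' \
    'http://localhost:8983/solr/products/update?commit=true' --data-binary '
  [{"id": "product-1", "doc_type": "0", "brand": "Nike",
    "_childDocuments_": [
      {"id": "sku-a", "doc_type": "1", "size": "S", "color": "blue"}]}]'

  # parents whose child SKUs match both constraints on the same SKU
  curl 'http://localhost:8983/solr/products/select' \
    --data-urlencode 'q={!parent which="doc_type:0"}size:S AND color:blue'

For the multi-select part, the usual tag/exclude trick ({!tag=...} on the
fq, {!ex=...} on the facet) still applies, but each excluded fq has to stay
a {!parent} query so the domains line up.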
Hello, Moenieb.
It's worth mentioning that it's not effective to include java-user@ in this
thread.
Also, this proposal is aimed at DIH; that's worth mentioning in the subject.
Then, this config looks like it will issue a Solr request for every parent
row, which is deadly inefficient.
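DIH's generic entity caching (one bulk fetch up front, then in-memory
lookups) is the usual fix for a sub-entity that fires a request per parent
row; a sketch with a SQL child entity in data-config.xml, table and column
names being assumptions:

  <entity name="parent" query="SELECT id, name FROM parent">
    <entity name="child"
            processor="SqlEntityProcessor"
            cacheImpl="SortedMapBackedCache"
            cacheKey="parent_id"
            cacheLookup="parent.id"
            query="SELECT parent_id, size, color FROM child"/>
  </entity>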
On Wed, Jan 25,
> On Jan 25, 2017, at 5:19 PM, Shawn Heisey wrote:
>
> It seems that Lucene/Solr
> creates a lot of references as it runs, and collecting those in parallel
> offers a significant performance advantage.
This is critical for any tuning. Most of the query time allocations in Solr
have the lifetim
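The flag usually meant here is -XX:+ParallelRefProcEnabled. A sketch of
where it would go, assuming the stock solr.in.sh and its GC_TUNE hook:

  # solr.in.sh -- appended to whatever collector flags are already in use
  GC_TUNE="$GC_TUNE -XX:+ParallelRefProcEnabled"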
On 1/25/2017 12:04 PM, Greg Harris wrote:
> I think my experience to this point is that G1 (barring unknown Lucene bug
> risk) is actually a lower-risk, easier collector to use. However, that
> doesn't necessarily mean better. You don't have to set the space sizes or
> any number of all sorts of various para
Did anyone figure out a solution for this? I ran into the same issue when I
upgraded from 6.2.1 to 6.4. DIH works perfectly fine in 6.2.1.
I tried the out-of-the-box example and do not see DIH in the example cloud
module either.
On 1/25/2017 4:06 PM, Dan Scarf wrote:
> I upgraded Solr from 6.3.0 to 6.4.0 this morning. All seemed good according
> to the logs, but this afternoon we discovered that the DataImport tabs in
> our Collections now say:
>
> 'Sorry, no dataimport-handler defined!'.
This is a bug that only applies to 6.4
On 1/24/2017 5:43 AM, Chris Rogers wrote:
> I'm having frustrating issues getting SOLR 6.4.0 to recognize the existence
> of my DIH config. I'm using the Oracle Java 8 JDK on Ubuntu 14.04.
>
> The DIH .jar file appears to be loading correctly. There are no errors in the
> SOLR logs. It just says “Sor
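For comparison, a typical 6.x registration in solrconfig.xml, with the jar
path and config file name as assumptions (though per the message above,
6.4.0 can show 'no dataimport-handler defined' in the UI even when this
part is right):

  <lib dir="${solr.install.dir:../../../..}/dist/"
       regex="solr-dataimporthandler-.*\.jar" />

  <requestHandler name="/dataimport"
                  class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">db-data-config.xml</str>
    </lst>
  </requestHandler>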
Hi,
We're working on rolling out a far newer instance of Solr using SolrCloud
than what we currently have in Production.
In our Dev environment, we had set up Solr 6.3.0. Over the last couple of
weeks, the Ops team has been test driving SolrCloud and the
resiliency/failover, while our Dev team ha
DIH is not multi-threaded, and so the idea of "queueing" up requests is a
misnomer. You might be better off using something other than
DataImportHandler.
LogStash can pull what it calls "events" from a database and then push them
into Solr, and you have some of the same row transformation capa
What we do is (sketched below):
1. Run a URL to delete *:*, but do not commit.
2. Kick off indexing on DIH1 with clean=false and commit=false.
3. Kick off indexing on DIH2 with clean=false and commit=false.
4. Then we manually commit.
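A sketch of those steps as URLs, assuming one core with two DIH handlers
registered as /dataimport1 and /dataimport2, and that autoCommit will not
fire in between:

  curl 'http://localhost:8983/solr/core1/update' -H 'Content-Type: text/xml' \
       --data-binary '<delete><query>*:*</query></delete>'
  curl 'http://localhost:8983/solr/core1/dataimport1?command=full-import&clean=false&commit=false'
  curl 'http://localhost:8983/solr/core1/dataimport2?command=full-import&clean=false&commit=false'
  # once both imports report completed:
  curl 'http://localhost:8983/solr/core1/update?commit=true'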
On Wed, Jan 25, 2017 at 2:57 PM, Nkeet Shah
wrote:
> Hi,
> I have a multi-thread application that
Hi,
I have a multi-threaded application that makes DIH requests to perform
indexing. What I could not gather from the documentation is whether DIH
requests are queued up.
In essence, if I made a request to, say, DIH1, and it has accepted the
request and is working on the indexing, what would happen
Hah, interesting.
The fact that the CMS collector fails back to a *single-threaded* collection on
concurrent-mode-failure had me seriously considering trying the Parallel
collector a year or two ago. I figured out (and stopped) the queries that were
doing the sudden massive allocations that wer
Has anybody done this? Not for long-term use of course, but does it work
well enough for a rolling upgrade?
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
Just to chime in with a few quick thoughts:
I think my experience to this point is that G1 (barring unknown Lucene bug
risk) is actually a lower-risk, easier collector to use. However, that
doesn't necessarily mean better. You don't have to set the space sizes or
any number of all sorts of various parameters
It might be possible by sticking additional update request processors
before the signature one. For example: clone the field, apply a regex on
the clone instead of tokenizing, then take the signature (a sketch below).
If the clone is too much of a burden, it may even be possible to then add
an IgnoreField URP to remove it or map it in the sc
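A sketch of that chain in solrconfig.xml, with the field names (title
cloned to title_sig, whitespace collapsed by the regex, then signed and
dropped) as assumptions:

  <updateRequestProcessorChain name="dedupe">
    <processor class="solr.CloneFieldUpdateProcessorFactory">
      <str name="source">title</str>
      <str name="dest">title_sig</str>
    </processor>
    <processor class="solr.RegexReplaceProcessorFactory">
      <str name="fieldName">title_sig</str>
      <str name="pattern">\s+</str>
      <str name="replacement"> </str>
    </processor>
    <processor class="solr.processor.SignatureUpdateProcessorFactory">
      <bool name="enabled">true</bool>
      <str name="signatureField">signature</str>
      <str name="fields">title_sig</str>
      <str name="signatureClass">solr.processor.Lookup3Signature</str>
    </processor>
    <processor class="solr.IgnoreFieldUpdateProcessorFactory">
      <str name="fieldName">title_sig</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>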
Hello,
This is not possible out of the box; you would need to manually pass the
input through an analyzer with a tokenizer and your stemming token filter,
and put the output together again.
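One way to get the analyzer's output without re-implementing it is the
field analysis handler Solr registers by default; a sketch, the field type
being an assumption:

  curl 'http://localhost:8983/solr/collection1/analysis/field' \
    --data-urlencode 'analysis.fieldtype=text_en' \
    --data-urlencode 'analysis.fieldvalue=running quickly' \
    -d 'wt=json'

The response lists the tokens after each stage, so the client can take the
post-stemming tokens and join them back together.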
Markus
-Original message-
> From:Leonidas Zagkaretos
> Sent: Wednesday 25th January 2017 17
Hi all,
We have successfully integrated Solr in our application, and now we are
facing a requirement where the application should be able to search for
duplicate records in a Solr core based on equality in 3 distinct fields.
We tried using SignatureUpdateProcessorFactory as described in
https://cwiki.
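Once a signature field is in place, one way to surface the duplicate groups
is a facet on it; a sketch, with the field name 'sig' as an assumption:

  curl 'http://localhost:8983/solr/core1/select?q=*:*&rows=0&facet=true&facet.field=sig&facet.mincount=2'

Any bucket with a count of 2 or more is a set of records that agreed on all
3 fields.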
Hi,
I'm trying out the Streaming Expressions in Solr 6.3.0.
Currently, I'm facing the issue of not being able to get the fields in the
result set to be displayed in the same order as I put them in the query.
For example, when I execute this query:
http://localhost:8983/solr/collection1/stream?
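For reference, a typical 6.3 search expression of that shape; the field
list, the sort, and the qt="/export" handler (which needs docValues on the
returned fields) are assumptions:

  curl 'http://localhost:8983/solr/collection1/stream' \
    --data-urlencode 'expr=search(collection1, q="*:*", fl="id,name_s,price_f", sort="id asc", qt="/export")'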