Hello Shawn, Erick,
I thought about that too, but dismissed it; other similar batched processes
don't show this problem. Nonetheless, I reset cumulativeAdds and watched a batch
being indexed: it got indexed twice!
Thanks!
Markus
-Original message-
> From:Erick Erickson
> Sent: Wednesd
Very likely I'm late to this party :) not sure with solr standalone, but
with solrcloud (7.3.1) you have to reload the core every time synonyms
referenced by a schema are changed.
On Mon, Nov 26, 2018 at 8:51 PM Walter Underwood
wrote:
> Should be easy to check with the analysis UI. Add a synony
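For reference, the reload described above can be triggered over HTTP. A minimal sketch; the host, collection name `mycollection`, and core name `mycore` are placeholders, not from the thread:

```shell
# SolrCloud: reload every replica of a collection so that re-uploaded
# synonym files referenced by the schema take effect
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"

# Standalone Solr: reload a single core instead
curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=mycore"
```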
Dear Solr Team,
In my SolrCloud cluster, I am using an external ensemble of 3 ZooKeeper nodes
and 2 Solr instances.
I already created a collection using port 8983 and it holds a lot of data.
Now I want to enable SSL.
As per your help document, I
On 11/28/2018 6:37 AM, Vincenzo D'Amore wrote:
Very likely I'm late to this party :) not sure with solr standalone, but
with solrcloud (7.3.1) you have to reload the core every time synonyms
referenced by a schema are changed.
I have a 7.5.0 download on my workstation, so I fired that up, creat
Hi John,
I'm not an expert on TRA, but I don't think so. The TRA functionality
I'm familiar with involves creating and deleting underlying
collections and then routing documents based on that information. As
far as I know that happens at the UpdateRequestProcessor level - once
your data is index
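For context, a Time Routed Alias like the one discussed is set up through the Collections API. This is only a sketch; the alias name, routing field, and config set are assumptions, not from the thread:

```shell
# Sketch: create a Time Routed Alias (Solr 7.x Collections API).
# Documents are routed to daily collections by timestamp_dt;
# %2B1DAY is the URL-encoded router interval "+1DAY".
curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS\
&name=timedata\
&router.name=time\
&router.field=timestamp_dt\
&router.start=NOW/DAY\
&router.interval=%2B1DAY\
&create-collection.collection.configName=_default\
&create-collection.numShards=1"
```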
For all those who wanted to be at the conference for the talks :-) but
could not:
https://www.youtube.com/watch?v=Hm98XL0Mw5c&list=PLU6n9Voqu_1HW8-VavVMa9lP8-oF8Oh5t
(Plug) Mine was: "JSON in Solr: from top to bottom", video at:
https://www.youtube.com/watch?v=WzYbTe3-nFI , slides at:
https://www.
Thanks Alex, and thanks to everyone who was part of organizing the
conference!
On Wed, Nov 28, 2018 at 12:28 PM Alexandre Rafalovitch
wrote:
> For all those who wanted to be at the conference for the talks :-) but
> could not:
>
> https://www.youtube.com/watch?v=Hm98XL0Mw5c&list=PLU6n9Voqu_1HW8-
I noticed some were out a few days ago, but I don't think they're all there
yet (mine isn't)
On Wed, Nov 28, 2018 at 12:46 PM Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> Thanks Alex, and thanks to everyone who was part of organizing the
> conference!
>
> On Wed, Nov 28, 2018 at
Hi,
I have deployed Solr 7.2 in a staging server in standalone mode. I want to
move it to the production server.
I would like to know whether I need to run the indexing process again or is
there any easier way to move the existing index?
I went through this documentation but I couldn't figure ou
You just set up the Solr install on the production server as a slave to
your current install and hit the replicate button from the admin interface
on the production server.
On Wed, Nov 28, 2018 at 1:34 PM Arunan Sugunakumar
wrote:
> Hi,
>
> I have deployed Solr 7.2 in a staging server in standalo
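The same one-off pull can be done without the admin UI, via the replication handler. A sketch only; the hostnames and core name are placeholders:

```shell
# Pull the index from the staging server onto production once,
# using the replication handler's fetchindex command.
curl "http://prod-host:8983/solr/mycore/replication?command=fetchindex&masterUrl=http://staging-host:8983/solr/mycore/replication"
```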
Time for import - 5-6 minutes
Warmup time - 40seconds
The autoCommit and autoSoftCommit settings are both disabled, and we fire a
commit only after the import is completed.
I have some more doubts:
1. In the case of master-slave, is an auto-warm strategy available for slaves?
2. Should I also have a limit
Erick,
On 11/27/18 20:47, Erick Erickson wrote:
> And do note one implication of the link Shawn gave you. Now that
> you've optimized, you probably have one huge segment. It _will not_
> be merged unless and until it has < 2.5G "live" documents. So
Hi,
I have an index that is built using a combination of fields (Title,
Description, Phone, Email etc). I have an indexed all the fields and the
combined copy field as well.
The query that I have is a combination of all the fields as input
(Title + Description + Phone + Email).
There are som
Arunan Sugunakumar wrote:
> https://lucene.apache.org/solr/guide/6_6/making-and-restoring-backups.html
We (also?) prefer to keep our stage/build setup separate from production.
Backup + restore works well for us. It is very fast, as it is basically just
copying the segment files.
- Toke Eskil
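The backup-and-restore flow mentioned above can be driven through the replication handler, per the linked guide. A sketch under assumed hostnames, core name, and paths:

```shell
# Take a named snapshot on the build/stage machine.
curl "http://staging:8983/solr/mycore/replication?command=backup&location=/backups&name=nightly"

# ...copy the snapshot directory to the production machine, then
# restore it into the production core.
curl "http://prod:8983/solr/mycore/replication?command=restore&location=/backups&name=nightly"
```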
The terminology we use at my company is you want to *gate* the effect of
boost to only very precise scenarios. A lot of this depends on how your
Email and Phone numbers are being tokenized/analyzed (ie what analyzer is
on the field type), because you really only want to boost when you have
high con
Ah yes, this is a common gotcha; it's because the bq is recursively applied
to itself.
So you have to give that bq its own empty bq:
bq={!edismax bq='' mm=80% qf=Email^100 v=$q}
v is simply the 'q' for this subquery; by passing v=$q you explicitly set
it to whatever was passed in.
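Put together as a full request, the nested-bq pattern looks roughly like this. The collection name, query, and qf field list are placeholders for illustration:

```shell
# edismax query whose boost query (bq) carries an empty bq of its own,
# so it is not recursively re-applied to itself.
curl "http://localhost:8983/solr/mycollection/select" \
  --data-urlencode "q=john smith" \
  --data-urlencode "defType=edismax" \
  --data-urlencode "qf=Title Description Phone Email" \
  --data-urlencode "bq={!edismax bq='' mm=80% qf=Email^100 v=\$q}"
```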
Hi Doug,
Thank you for your response. I tried the above boost syntax, but I get the
following error about going into an infinite loop. In the wiki page I couldn't
figure out what the 'v' parameter is. (
https://lucene.apache.org/solr/guide/7_0/the-extended-dismax-query-parser.html).
I will try the ana
Hi,
I use the following http request to start solr index optimization:
http://localhost:8983/solr//update?skipError=true -F stream.body='
'
The request returns status code 200 quickly, but when looking at the Solr
instance I noticed that the actual optimization has not completed yet, as there
are
Hi,
How big is your index, and do you have enough disk space to do the
optimization? You need at least twice the index size in free disk space for
the optimization to be successful, and even more if you are still doing
indexing during the optimization.
Also, which Solr version are you using?
Why do you think you need to optimize? Most configurations don’t need that.
And no, there is no synchronous optimize request.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Nov 28, 2018, at 6:50 PM, Zheng Lin Edwin Yeo wrote:
>
> Hi,
>
> How big
Hi,
Have you tried with the steps in this document?
https://lucene.apache.org/solr/guide/7_5/enabling-ssl.html
This is from the guide in the latest Solr 7.5.0 version. Which version are
you using?
Regards,
Edwin
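As a pointer to what those guide steps involve, the server-side settings end up in solr.in.sh, along these lines. The keystore path and passwords below are placeholders:

```shell
# solr.in.sh fragment enabling SSL; keystore path and passwords
# are placeholders, generated per the enabling-ssl guide.
SOLR_SSL_KEY_STORE=/opt/solr/etc/solr-ssl.keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE=/opt/solr/etc/solr-ssl.keystore.jks
SOLR_SSL_TRUST_STORE_PASSWORD=secret
SOLR_SSL_NEED_CLIENT_AUTH=false
SOLR_SSL_WANT_CLIENT_AUTH=false
```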
On Wed, 28 Nov 2018 at 22:41, Tech Support wrote:
> Dear Solr Team,
>
> I
To whom it might concern,
Recently, I have been studying whether Apache Solr is able to re-index (Full
Import / Delta Import) periodically by configuration, instead of triggering it
by URL (
e.g.
http://localhost:8983/solr/{collection_name}/dataimport?command=full-import
) from a scheduler tool.
Version of the Solr usi
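For what it's worth, stock Solr ships no built-in scheduler for the DataImportHandler, so an external scheduler hitting that URL is the usual route. A sketch; the collection name and schedule are placeholders:

```shell
# crontab entry: fire a DIH full-import nightly at 02:00
# (collection name is a placeholder).
0 2 * * * curl -s "http://localhost:8983/solr/mycollection/dataimport?command=full-import&clean=true" > /dev/null
```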