Hi,
I have a fieldtype "suggestion" with definition as :
(fieldType XML definition not preserved by the list archive)
I have a field named "mysuggestion" with definition as :
(field XML definition not preserved by the list archive)
I copy other fields - names, countries, short description - to
"mysuggestio
Then you'll have to scrub the data on the way in.
Or change the type to something like KeywordTokenizer and use
PatternReplaceCharFilter(Factory) to get rid of unwanted stuff.
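A minimal sketch of what that second option might look like in schema.xml — the fieldType name and the regex here are only illustrations, so adjust the pattern to whichever characters you actually need to strip:

```xml
<!-- Illustrative fieldType: keep the whole value as a single token,
     stripping unwanted characters before tokenization. -->
<fieldType name="suggestion_scrubbed" class="solr.TextField">
  <analyzer>
    <!-- char filters run before the tokenizer sees the input -->
    <charFilter class="solr.PatternReplaceCharFilterFactory"
                pattern="[^\p{L}\p{N} ]" replacement=""/>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>
```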
Best,
Erick
On Wed, Jul 12, 2017 at 7:07 PM, Zheng Lin Edwin Yeo
wrote:
> The field which I am bucketing is indexed us
On 7/12/2017 8:25 PM, Zheng Lin Edwin Yeo wrote:
> Thanks Shawn and Erick.
>
> We are planning to migrate the data in two of the largest collections to
> another hard disk, while the rest of the collections remains at the default
> one for the main core.
> So there are already data indexed in the c
Try running a second data import or any other indexing job after the replication of
the first data import is completed.
My observation is that during the replication period (when there are docs in the queue),
tlog cleanup will not be triggered. So when the queue is 0, submit a second batch
and monitor the queue a
Thanks Shawn and Erick.
We are planning to migrate the data in two of the largest collections to
another hard disk, while the rest of the collections remains at the default
one for the main core.
So there are already data indexed in the collections.
Will this method work, or we have to create a ne
The field which I am bucketing is indexed using String field, and does not
pass through any tokenizers.
Regards,
Edwin
On 12 July 2017 at 21:52, Susheel Kumar wrote:
> I checked on 6.6 and don't see any such issues. I assume the field you are
> bucketing on is string/keywordtokenizer not text/a
Glad to hear you found your solution! I have been combing over this post and
others on this discussion board many times and have tried so many tweaks to
configuration, order of steps, etc, all with absolutely no success in
getting the Source cluster tlogs to delete. So incredibly frustrating. If
1> I would not do this. First there's the lock issues you mentioned.
But let's say replica1 is your indexer and replicas2 and 3 point to
the same index. When replica1 commits, how do replicas 2 and 3 know to
open a new searcher?
<2> and <3> just seem like variants of coupling Solr instances to
col
I'm having difficulty finding the value for numFound that is in the response.
My context is a custom component in the last-components list for /select.
Where rb is the ResponseBuilder parameter for the process(..) method:
rb.getNumberDocumentsFound() is 0.
rb.totalHitCount is 0.
I don't unders
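For what it's worth, in a last-components SearchComponent the total hit count is usually read off the DocList held by the ResponseBuilder rather than off the ResponseBuilder itself. A sketch against the Solr 6.x API (not runnable standalone — it assumes the Solr jars on the classpath, and the component/field names are hypothetical):

```java
// Sketch only: assumes org.apache.solr classes from Solr 6.x on the classpath.
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;
import org.apache.solr.search.DocListAndSet;

public class NumFoundComponent extends SearchComponent {
  @Override
  public void prepare(ResponseBuilder rb) { /* nothing to do */ }

  @Override
  public void process(ResponseBuilder rb) {
    DocListAndSet results = rb.getResults();
    if (results != null && results.docList != null) {
      // matches() is the total number of hits for the query (numFound)
      long numFound = results.docList.matches();
      rb.rsp.add("myNumFound", numFound);
    }
  }

  @Override
  public String getDescription() { return "exposes numFound"; }
}
```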
Thank you Shawn - we have some WARN messages like:
DFSClient: Slow waitForAckedSeqno took 31066ms (threshold=3ms)
and
DFSClient: Slow ReadProcessor read fields took 30737ms (threshold=3ms); ack:
seqno: 1 reply: SUCCESS reply: SUCCESS reply: SUCCESS
downstreamAckTimeNanos: 30735948057,
Hi,
We are using some features like collapse and joins that force us not to use
sharding for now. Still I am checking for possibilities for load balancing and
high availability.
1- I think about using many Solr instances run against the
same shard file system. This way all instances will work with
I guess your certificates are self generated? In that case, this is a
browser nanny trying to protect you.
I also get the same error in Firefox; however, Chrome was a little forgiving. It
showed me an option to choose my certificate (the client certificate), and
then bypassed the safety barrier.
I should
The two-collection approach with aliasing is a good one.
You can also use the backup and restore APIs -
https://lucene.apache.org/solr/guide/6_6/making-and-restoring-backups.html
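For reference, the Collections API calls behind that page look roughly like this — the collection name, backup name, and location below are placeholders, and the location must be a path visible to every node (e.g. a shared filesystem):

```
# Back up a collection
http://localhost:8983/solr/admin/collections?action=BACKUP&name=mybackup&collection=mycoll&location=/backups

# Restore it into a (new) collection
http://localhost:8983/solr/admin/collections?action=RESTORE&name=mybackup&collection=mycoll_restored&location=/backups
```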
Mike
On Wed, Jul 12, 2017 at 10:57 AM, Vincenzo D'Amore
wrote:
> Hi,
>
> I'm moving to Solr Cloud 6.x and I se
I am not using Zookeeper. Is the urlScheme also used outside of Zookeeper?
~~~
William Kevin Miller
ECS Federal, Inc.
USPS/MTSC
(405) 573-2158
-Original Message-
From: esther.quan...@lucidworks.com [mailto:esther.quan...@lucidworks.com]
Sent: Wednesday, July 12,
Hi William,
You should be able to navigate to https://localhost:8983/solr (albeit with
your host:port) to access the admin UI, provided you updated the urlScheme
property in the Zookeeper cluster props.
Did you complete that step?
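For anyone who hasn't done that step yet, the urlScheme cluster property is typically set through the zkcli script shipped with Solr (the ZooKeeper host below is a placeholder):

```shell
# Tell the cluster that nodes should be addressed over https
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
  -cmd clusterprop -name urlScheme -val https
```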
Esther
Search Engineer
Lucidworks
> On Jul 12, 2017, at 0
Hi,
I'm moving to Solr Cloud 6.x and I see rollback is not supported when running
in Cloud mode.
In my scenario, there are basically two tasks (full indexing, partial
indexing).
Full indexing
=============
This is the most important case, where I really need the possibility to
rollback.
The full rei
Hi,
The MLT documentation says that for best results, the fields should have
stored term vectors in schema.xml (termVectors="true" on the field).
My question: should I also create the TermVectorComponent and declare it in
the search handler?
In other terms, do I have to do this in my solrconfig.xml for best results?
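For context, the schema side of that recommendation looks something like this — the field name and type here are placeholders, not from any particular schema:

```xml
<!-- Illustrative schema.xml field with term vectors stored for MLT -->
<field name="description" type="text_general"
       indexed="true" stored="true"
       termVectors="true" termPositions="true" termOffsets="true"/>
```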
I am trying to enable SSL and I have followed the instructions in the Solr 6.4
reference manual, but when I restart my Solr server and try to access the Solr
Admin page I am getting:
"This page isn't working";
sent an invalid response;
ERR_INVALID_HTTP_RESPONSE
Does the Solr server need to be
There's not much anybody can do unless you tell us what the problem
you're having is. What have you tried? Exactly _how_ does it not work?
Please read:
https://wiki.apache.org/solr/UsingMailingLists
Best,
Erick
On Wed, Jul 12, 2017 at 5:58 AM, Sweta Parekh wrote:
> Hi All,
>
> We are using Solr
Hi Sweta,
I recently adapted that patch to a Solr instance running version 6.4. If my
memory does not fail me, I think the only changes I had to make were
updating the package imports for the latest OpenNLP version (I am using
OpenNLP 1.8):
What problem are you struggling with, exactly?
Best,
Shawn's way will work; of course, you have to be sure you didn't index
any data before editing all the core.properties files.
There's another way to set the dataDir per core though that has the
advantage of not entailing any down time or hand editing files:
> create the collection with createNodeSe
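For anyone following along, the per-core edit Shawn's approach involves is a one-line change; a hypothetical core.properties might read (names and paths are illustrative):

```
# core.properties for one replica, pointing its data at another disk
name=bigcollection_shard1_replica1
dataDir=/mnt/bigdisk/solr/bigcollection_shard1_replica1/data
```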
We have buffers disabled as described in the CDCR documentation. We also
have autoCommit set for hard commits, but openSearcher false. We also have
autoSoftCommit set.
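As a point of comparison, the commit settings described above would look something like this in solrconfig.xml (the times are illustrative, not a recommendation):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- hard commit: flush segments to disk, but don't open a new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- soft commit: make documents visible to searchers -->
  <autoSoftCommit>
    <maxTime>120000</maxTime>
  </autoSoftCommit>
</updateHandler>
```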
On Tue, Jul 11, 2017 at 5:00 PM, Xie, Sean wrote:
> Please see my previous thread. I have to disable buffer on source cluster
>
On 7/12/2017 7:14 AM, Joe Obernberger wrote:
> Started up a 6.6.0 solr cloud instance running on 45 machines
> yesterday using HDFS (managed schema in zookeeper) and began
> indexing. This error occurred on several of the nodes:
> Caused by: org.apache.solr.common.SolrException: openNewSearcher
>
I checked on 6.6 and don't see any such issues. I assume the field you are
bucketing on is string/keywordtokenizer not text/analyzed field.
===
"facets":{
  "count":5,
  "myfacet":{
    "buckets":[{
        "val":"A\t\t\t",
        "count":2},
      {
        "val":"L\t\t\t"
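The \t\t\t suffixes on those bucket values suggest the raw input carries trailing tabs. One way to scrub them on the way in — in whatever indexing code feeds Solr, before the values reach the string field — is a simple regex, sketched here in Python (the function name is hypothetical):

```python
import re

def scrub(value: str) -> str:
    """Strip trailing whitespace (tabs included) before indexing into a string field."""
    return re.sub(r'\s+$', '', value)

print(scrub("A\t\t\t"))  # prints "A" with no trailing tabs
```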
On 7/12/2017 7:20 AM, Nawab Zada Asad Iqbal wrote:
> I am wondering what is wrong if I pass both http and https port to
> underlying jetty server, won't that be enough to have both http and https
> access to solr ?
Jetty should be capable of doing both HTTP and HTTPS (on different
ports), but th
On 7/12/2017 12:38 AM, Zheng Lin Edwin Yeo wrote:
> I found that we can set the path under <dataDir> in solrconfig.xml
>
> However, this seems to work only if there is one replica. How do we set it
> if we have 2 or more replica?
Setting dataDir in solrconfig.xml is something that really only works in
stan
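For reference, the solrconfig.xml element under discussion is the top-level dataDir setting (the path is illustrative):

```xml
<dataDir>/var/solr/data/mycore</dataDir>
```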
Thanks Rick
I am wondering what is wrong if I pass both the http and https ports to the
underlying Jetty server; won't that be enough to have both http and https
access to Solr?
Regards
Nawab
On Wed, Jul 12, 2017 at 3:39 AM Rick Leir wrote:
> Hi all,
> The recommended best practice is to run a web ap
Started up a 6.6.0 solr cloud instance running on 45 machines yesterday
using HDFS (managed schema in zookeeper) and began indexing. This error
occurred on several of the nodes:
auto commit error...:org.apache.solr.common.SolrException: Error opening
new searcher
at org.apache.solr.core.
Hi All,
We are using Solr 6.6 and trying to integrate OpenNLP but are unable to do that
using the LUCENE-2899 patch. Can someone help or provide instructions for the same?
Regards,
Sweta Parekh
Hi all,
The recommended best practice is to run a web app in front of Solr, and maybe
there is no benefit in SSL between the web app and Solr. In any case, if SSL is
desired, you would configure the web app to always use HTTPS.
Without the web app, you can have Apache promote a connection from
+1
I was trying to understand a collection reload timeout happening lately in
a Solr Cloud cluster, and the Overseer Status was hard to decipher.
More human-readable names and some additional documentation could help here.
Cheers
Alessandro Benedetti
Search Consultant,