I think CJKFoldingFilter will work for you. I put 舊小說 in the index and then
each of A, B, C, or D in the query, and they all seem to match; CJKFF is
transforming the 舊 to 旧.
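For reference, a rough sketch of a field type wiring the filter in (CJKFoldingFilter is a third-party plugin, so the factory class name below is an assumption; the rest is a typical CJK analysis chain):

<fieldType name="text_cjk_fold" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- third-party filter that folds variant forms, e.g. 舊 -> 旧 (class name assumed) -->
    <filter class="edu.stanford.lucene.analysis.CJKFoldingFilterFactory"/>
    <filter class="solr.CJKWidthFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.CJKBigramFilterFactory"/>
  </analyzer>
</fieldType>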
On Fri, Jul 20, 2018 at 9:08 AM, Susheel Kumar wrote:
> I lack Chinese language knowledge, but if you want, I can
>
>
> On Fri, Jul 20, 2018 at 3:11 PM, Susheel Kumar wrote:
>
> > I think CJKFoldingFilter will work for you. I put 舊小說 in the index and then
> > each of A, B, C, or D in the query, and they all seem to match; CJKFF is
> > transforming the 舊 to 旧.
>
In usual circumstances, when one Zookeeper goes down while the other 2 are up,
Solr continues to operate, but when one of the ZK machines was not reachable,
with ping returning the result below, Solr couldn't start. See the stack trace
below.
ping: cannot resolve ditsearch001.es.com: Unknown host
Setup: Solr
> > om: Name or service not known
> >
> > Is this address actually resolvable at the time?
> >
> > On Mon, Jul 23, 2018 at 3:46 PM, Susheel Kumar wrote:
> >
> >> In usual circumstances, when one Zookeeper goes down while the other 2 are up,
>
Thanks.
>
> On Tue, Jul 24, 2018 at 2:31 AM Susheel Kumar wrote:
>
> > Something messed up with DNS, which resulted in an unknown host exception
> > for one of the machines in our env and caused Solr to throw the above
> > exception.
> >
> > Eric, I have the Solr
and as you suggested, use stop words before shingles...
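Roughly along these lines, i.e. a StopFilter ahead of the ShingleFilter (the analyzer wrapper and stopword file are placeholders; the shingle attributes are the ones from the snippet quoted below):

<analyzer>
  <tokenizer class="solr.StandardTokenizerFactory"/>
  <!-- drop stopwords first so they never end up inside shingles -->
  <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
  <!-- then build shingles from the remaining tokens -->
  <filter class="solr.ShingleFilterFactory" outputUnigrams="true" tokenSeparator=""/>
</analyzer>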
On Fri, Aug 3, 2018 at 8:10 AM, Clemens Wyss DEV wrote:
>
>
>
>outputUnigrams="true" tokenSeparator=""/>
>
>
> seems to "work"
>
> -Original Message-
> From: Clemens Wyss DEV
> Sent: Friday, 3 August 2018 13
Hello,
What type of storage/volume is recommended to run Solr on a Kubernetes pod?
I know that in the past Solr had issues with NFS storing its indexes and it
was not recommended.
https://kubernetes.io/docs/concepts/storage/volumes/
Thanks,
Susheel
245.76 245.76
> >
> > and we get very good performance.
> >
> > Ultimately, though, it's going to depend on your workload.
> >
> > From: Susheel Kumar
> > Sent: 06 February 2020 13:43
> > To: solr-user@lucene.apache.org
>
Check if the directories below have the correct permissions. The solr.log
file not being created implies some issue.
tail: cannot open
'/home/pawasthi/projects/solr_practice/ex1/solr-8.4.1/example/cloud/node1/solr/../logs/solr.log'
Solr home directory
/home/pawasthi/projects/solr_practice/ex1/solr-8.4.1/example/clo
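For example, something along these lines (paths taken from the log line above; just a sketch of how to verify ownership and permissions, assuming Solr runs as the same user that owns the install):

ls -ld /home/pawasthi/projects/solr_practice/ex1/solr-8.4.1/example/cloud/node1/solr \
       /home/pawasthi/projects/solr_practice/ex1/solr-8.4.1/example/cloud/node1/logs
# if ownership turns out to be wrong:
chown -R pawasthi:pawasthi /home/pawasthi/projects/solr_practice/ex1/solr-8.4.1/example/cloud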
Basic auth should help you get started:
https://lucene.apache.org/solr/guide/8_1/basic-authentication-plugin.html
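A minimal security.json along the lines of the example in that guide (the credentials value below is the guide's sample solr/SolrRocks hash, not something to keep in production):

{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [{ "name": "security-edit", "role": "admin" }],
    "user-role": { "solr": "admin" }
  }
}

Upload it to ZooKeeper (or put it in SOLR_HOME for standalone), restart, then add your own users via the authentication API.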
On Mon, Mar 16, 2020 at 10:44 AM Ryan W wrote:
> How do you, personally, do it? Do you use IPTables? Basic Authentication
> Plugin? Something else?
>
> I'm asking in part so I'll have
Hello,
One of our Solr 6.6.2 DR clusters (the CDCR target), which doesn't even have any
live search load, is frequently taking 60 ms for the ping / health-check calls.
Has anyone seen this before, or any suggestions on what could be wrong? The
collection has 8 shards / 3 replicas and 64GB memory and index
http://server1:8080/solr/COLL/select?indent=on&q=*:*&wt=json&rows=0
{
  "responseHeader":{
    "zkConnected":true,
    "status":0,
    "QTime":600093,
    "params":{
      "q":"*:*",
      "indent":"on",
out and just starting to drop all packets.
>
> Regards,
>Alex
>
> On Mon., Aug. 17, 2020, 6:22 p.m. Susheel Kumar wrote:
>
> > Thanks for the all responses.
> >
> > Shawn - to your point, both ping and select in between are taking 600+
> > seconds to
Hello,
What is meant by the statement below? How do we set which cluster will act as
Source or Target at a given time?
"Both Cluster 1 and Cluster 2 can act as Source and Target at any given
point of time, but a cluster cannot be both Source and Target at the same
time."
Also following the directions mentioned i
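For reference, the /cdcr request handler each cluster defines in solrconfig.xml looks roughly like this (a sketch based on the CDCR section of the ref guide; hostnames and collection names are placeholders):

<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="replica">
    <!-- ZooKeeper ensemble and collection of the *other* cluster -->
    <str name="zkHost">cluster2zk1:2181,cluster2zk2:2181,cluster2zk3:2181</str>
    <str name="source">collection1</str>
    <str name="target">collection1</str>
  </lst>
  <lst name="replicator">
    <str name="threadPoolSize">8</str>
    <str name="schedule">1000</str>
    <str name="batchSize">128</str>
  </lst>
</requestHandler>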
Not sure which configuration you are using, but double-check that solrconfig.xml
has entries like the ones below, and that the sr_mv_txt field below is defined in
schema.xml for storing and indexing.
true
ignored_
sr_mv_txt
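(The three entries above lost their XML tags in the archive; they correspond to the defaults of the /update/extract handler, roughly as sketched below, where the exact parameter names are my best guess.)

<requestHandler name="/update/extract" startup="lazy"
                class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="lowernames">true</str>
    <str name="uprefix">ignored_</str>
    <str name="fmap.content">sr_mv_txt</str>
  </lst>
</requestHandler>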
Thnx
On Thu, Sep 19, 2019 at 11:02 PM PasLe Choix wrote:
> I am on Solr 7.7
equestHandler" >
>
> true
> ignored_
> _text_
>
>
>
> Is there anything wrong with it and how to fix it?
>
> Thank you.
>
> Pasle Choix
>
>
>
> On Mon, Sep 23, 2019 at 2:09 PM Susheel Kumar wrote:
>
>>
Hello,
I am trying to keep multiple versions of the same document (empId,
empName, deptID, effectiveDt, empTitle, ...) with different effective dates
(composite key: deptID, empID, effectiveDt), but mark / soft-delete (deleted=Y)
the older ones and keep deleted=N for the latest one.
This way I can query the
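For illustration, a sketch of the soft-delete update and the query for the latest version (host, collection name COLL, and the id format are placeholders; field names are the ones above):

curl -X POST 'http://server1:8080/solr/COLL/update?commit=true' \
  -H 'Content-Type: application/json' \
  -d '[{ "id": "dept01!emp42!2018-01-01", "deleted": { "set": "Y" } }]'

http://server1:8080/solr/COLL/select?q=empId:emp42&fq=deleted:N&sort=effectiveDt%20desc&rows=1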