> XY problem. What’s the use case you’re trying
> to support where you expect a shard’s number of live docs to drop to zero?
>
> Best,
> Erick
>
> > On Nov 30, 2020, at 4:57 AM, Pushkar Mishra
> > wrote:
> > >
> > > Hi Solr team,
> > >
> > > I am using solr cloud.
Hi Erick,
It is implicit.
I have explored the TTL approach, but due to some complications we can't use it.
Let me explain the actual use case.
We have limited space; we can't keep storing documents for an infinite
time. So, based on the customer's retention policy, I need to delete the
documents. And in this process i
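For reference, retention-driven cleanup like this is usually handled with a scheduled delete-by-query rather than shard manipulation; a minimal sketch, assuming a hypothetical timestamp_dt field and a 90-day policy:

  # Delete everything older than 90 days (field name and window are assumptions)
  curl "http://localhost:8983/solr/mycollection/update?commit=true" \
    -H "Content-Type: application/json" \
    -d '{"delete": {"query": "timestamp_dt:[* TO NOW-90DAYS]"}}'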
making this much more difficult than you need to. Assuming
that the total number of documents remains relatively constant, you can just
let Solr take care of it all and not bother with trying to individually manage
shards by using the default compositeID routing.
If the number of docs increases you
requires you take complete control of where
>>> documents
>>> go, i.e. which shard they land on.
This really sounds like an XY problem. What’s the use case you’re trying
to support where you expect a shard’s number of live docs to drop to zero?
Best,
Erick
> On Nov 30, 2020, at 4:57 AM, Pushkar Mishra wrote:
>
> Hi Solr team,
>
> I am using solr cloud.(version 8.5.x). I have a
Hi Solr team,
I am using Solr Cloud (version 8.5.x). I need to find a
configuration where I can delete a shard when the number of documents in
the shard reaches zero. Can someone help me out to achieve that?
It is urgent, so a quick response will be highly appreciated.
Thanks
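For reference, shard deletion goes through the Collections API, and DELETESHARD is only accepted for shards that are inactive or that belong to an implicit-router collection (which matches the "It is implicit" note above); a sketch with placeholder collection and shard names:

  # Delete an (empty, implicit-routing) shard; names are placeholders
  curl "http://localhost:8983/solr/admin/collections?action=DELETESHARD&collection=mycollection&shard=shard1"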
Solr isn’t meant to be public facing. Not sure how anyone would send these
commands since it can’t be reached from the outside world
> On Nov 12, 2020, at 7:12 AM, Sheikh, Wasim A.
> wrote:
>
Hi Team,
Currently we are facing the below vulnerability for the Apache Solr tool. Can
you please check the details below and help us fix this issue?
/etc/init.d/solr-master version
Server version: Apache Tomcat/7.0.62
Server built: May 7 2015 17:14:55 UTC
Server number: 7.0.62.0
OS Name: Lin
Can someone help on the above, please?
Using Solr 8.2; ZooKeeper 3.4; Solr mode: cloud with multiple collections; basic
authentication: enabled.
I am trying to run the
export JAVA_OPTS="-Djavax.net.ssl.trustStore=etc/solr-keystore.jks
-Djavax.net.ssl.trustStorePassword=solrssl
-Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.im
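For comparison, when SSL and basic auth are both enabled, the equivalent settings usually live in solr.in.sh; a sketch reusing the keystore values from the message (the solr:SolrRocks credentials are the reference-guide placeholder, not real ones):

  # solr.in.sh
  SOLR_SSL_TRUST_STORE=etc/solr-keystore.jks
  SOLR_SSL_TRUST_STORE_PASSWORD=solrssl
  SOLR_AUTH_TYPE="basic"
  SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"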
In addition to the insightful pointers by Zisis and Erick, I would like to
mention an approach in the link below that I generally use to pinpoint
exactly which threads are causing the CPU spike. Knowing this you can
understand which aspect of Solr (search thread, GC, update thread etc) is
taking mo
Zisis makes good points. One other thing is I’d look to
see if the CPU spikes coincide with commits. But GC
is where I’d look first.
Continuing on with the theme of caches, yours are far too large
at first glance. The default is, indeed, size=512. Every time
you open a new searcher, you’ll be exe
The values you have for the caches and maxWarmingSearchers do not look
like the defaults. Cache sizes are 512 for the most part, and
maxWarmingSearchers is 2 (if not, limit it to 2).
Sudden CPU spikes probably indicate GC issues. The # of documents you have
is small, are they huge documents? T
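For reference, a sketch of the stock solrconfig.xml settings these replies treat as the baseline (cache class names vary by Solr version):

  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
  <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
  <maxWarmingSearchers>2</maxWarmingSearchers>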
nt hits to them for
searching, so do I need to consider increasing the above sizes to bring down
the CPU usage and see a more stable Solr Cloud?
--
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com
ypt(CryptoKeys.java:323)
~[solr-core-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe -
ivera - 2019-07-19 15:11:04]
I need to understand what these errors are about, and is there any way to
remediate them?
--
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com
Subject: Re: Need to update SOLR_HOME in the solr service script and getting
errors
My setup is two Solr nodes running on separate Azure Ubuntu 18.04 LTS VMs using
an external ZooKeeper ensemble.
I installed Solr 6.6.6 using the install file and then followed the steps for
enabling SSL. I am able to start Solr, add collections, and the like using the
bin/solr script.
Example:
/opt/
Maybe your problems are in AWS land.
Thanks Erick and Phill.
We index data once weekly, and that is why we do the optimisation; it has
helped produce faster query results. I will experiment with fewer segments on
the current hardware.
The thing I am not clear about is that although there is no constant high usage
of extra IOPs other than
The optimal size for a shard of the index is by definition what works best on
the hardware with the JVM heap that is in use.
More shards mean a smaller index per shard, as you already know.
I spent months changing the sharding, the JVM heap, and the GC values before taking
the system l
Please consider _not_ optimizing. It’s kind of a misleading name anyway, and on the
version of Solr you’re using it may have unintended consequences; see:
https://lucidworks.com/post/segment-merging-deleted-documents-optimize-may-bad/
and
https://lucidworks.com/post/solr-and-optimizing-your-index-take-i
Thanks Shawn for your response.
We have seen a performance increase when optimising with a bigger number of
IOPs. Without the extra IOPs the optimisation took around 15-20 hours,
whereas the same index took 5-6 hours to optimise with higher IOPs.
Yes, the entire extra IOPs were never used to full o
Thanks Phill for your response.
Optimal Index size: Depends on what you are optimizing for. Query Speed?
Hardware utilization?
We are optimising it for query speed. What I understand is that even if we set
the merge policy to any number, the same amount of hard disk will still be
required for the bigger segment
On 5/20/2020 11:43 AM, Modassar Ather wrote:
Can you please help me with following few questions?
- What is the ideal index size per shard?
We have no way of knowing that. A size that works well for one index
use case may not work well for another, even if the index size in both
cases i
In my world your index size is common.
Optimal Index size: Depends on what you are optimizing for. Query Speed?
Hardware utilization?
Optimizing the index is something I never do. We live with about 28% deletes.
You should check your configuration for your merge policy.
I run 120 shards, and I
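Since checking the merge policy configuration is recommended above, a sketch of where deleted-document buildup is tuned in solrconfig.xml (values are hypothetical; deletesPctAllowed needs Solr 7.5+):

  <indexConfig>
    <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
      <int name="maxMergeAtOnce">10</int>
      <int name="segmentsPerTier">10</int>
      <!-- let merges keep deleted docs below roughly this percentage -->
      <double name="deletesPctAllowed">28.0</double>
    </mergePolicyFactory>
  </indexConfig>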
Hi,
Currently we have an index of size 3.5 TB. The index is distributed across
12 shards under two cores. The size of the index on each shard is almost
equal.
We do delta indexing every week and optimise the index.
The server configuration is as follows.
- Solr Version : 6.5.1
- AWS insta
Relevance scoring has indeed changed since Solr 6, from the tf/idf vector
model to Okapi BM25.
You will need to set the similarity to ClassicSimilarityFactory in the
schema.
Consult the reference guide[1] for how to do it.
[1]
https://lucene.apache.org/solr/guide/8_4/other-schema-elements.html
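A one-line sketch of that schema change (a global <similarity> element; re-indexing afterwards is advisable):

  <similarity class="solr.ClassicSimilarityFactory"/>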
Hello Team,
How are you? This is Karthik Reddy and I am working as a software
developer. I have one question regarding SOLR scores. On one of the projects
I am working on, we are using Apache Solr (Lucene).
We were using SOLR 5.4.1 initially and then migrated to SOLR 8.4.1. After
migration, I do
Hi Seetesh,
For IndexBasedSpellChecker the default distanceMeasure is LevensteinDistance
itself. That's why it is commented out in the Reference Guide.
regards
Kumar Gaurav
On Tue, Jan 28, 2020 at 1:01 PM seeteshh wrote:
> Hello Kumar Gaurav
>
> For IndexBasedSpellchecker is there a better option of us
My searchComponent is as follows
  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
    <str name="queryAnalyzerFieldType">text_general</str>
    <lst name="spellchecker">
      <str name="name">default</str>
      <str name="field">name</str>
      <str name="classname">solr.DirectSolrSpellChecker</str>
      <str name="distanceMeasure">internal</str>
      <float name="accuracy">0.5</float>
      <int name="maxEdits">2</int>
      <int name="minPrefix">1</int>
      <int name="maxInspections">5</int>
      <int name="minQueryLength">4</int>
      <float name="maxQueryFrequency">0.01</float>
    </lst>
    <lst name="spellchecker">
      <str name="name">wordbreak</str>
      <str name="classname">solr.WordBreakSolrSpellChecker</str>
      <str name="field">name</str>
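If it helps, a sketch of querying both dictionaries together once that component is wired into a request handler (parameters are from the spellcheck docs; the handler path is an assumption):

  /select?q=name:dell&spellcheck=true&spellcheck.dictionary=default&spellcheck.dictionary=wordbreak&spellcheck.extendedResults=true&spellcheck.collate=true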
Hello Kumar Gaurav,
For IndexBasedSpellchecker, is there a better option than using
org.apache.lucene.search.spell.LevensteinDistance, as this is not valid in
Solr 8.4?
This line seems to be commented out in the Reference Guide.
Regards,
Seetesh Hindlekar
Hello, Kumar.
I don't know. A 3 / 84 ratio seems reasonable. The only unknown part of the
equation was that {!simpleFilter}. Anyway, a profiler/sampler might give an
exact answer.
On Fri, Jan 24, 2020 at 8:55 AM kumar gaurav wrote:
> HI Mikhail
>
> Can you please see above debug log and help ?
>
> Than
Hi Mikhail
Can you please see the above debug log and help?
Thanks
On Thu, Jan 23, 2020 at 12:05 AM kumar gaurav wrote:
> Also
>
> its not looks like box is slow . because for following query prepare time
> is 3 ms but facet time is 84ms on the same box .Don't know why prepare time
> was huge fo
Also,
it does not look like the box is slow, because for the following query prepare
time is 3 ms but facet time is 84 ms on the same box. Don't know why prepare
time was huge for that example :( .
debug:
{
  rawquerystring:
    "{!parent tag=top which=$pq filters=$child.fq score=max v=$cq}",
  queryst
Lots of thanks, Mikhail.
Also, can you please answer: should I use docValues="true" for the _root_
field to improve this json.facet performance?
On Wed, Jan 22, 2020 at 11:42 PM Mikhail Khludnev wrote:
> Initial request refers unknown (to me) query parser {!simpleFilter, I
> can't comment on it.
>
The initial request refers to a query parser unknown to me, {!simpleFilter}; I
can't comment on it.
"Parsing queries took in millis: - time: 261" - usually preparing a query
takes a moment. I suspect the box is really slow per se or encountering heavy
load.
And then facets took about 6 times more - facet_module
Hi Mikhail
Here is the full debug log. Please have a look.
debug:
{
  rawquerystring:
    "{!parent tag=top which=$pq filters=$child.fq score=max v=$cq}",
  querystring:
    "{!parent tag=top which=$pq filters=$child.fq score=max v=$cq}",
  parsedquery:
    "AllParentsAware(ToParentBlockJoin
Screenshot didn't come through the list. That excerpt doesn't have any
informative numbers.
On Tue, Jan 21, 2020 at 5:18 PM kumar gaurav wrote:
> Hi Mikhail
>
> Thanks for your reply . Please help me in this .
>
> Followings are the screenshot:-
>
> [image: image.png]
>
>
> [image: image.png]
>
>
HI Mikhail
Can you please help ?
On Tue, Jan 21, 2020 at 7:48 PM kumar gaurav wrote:
> Hi Mikhail
>
> Thanks for your reply . Please help me in this .
Can you share the spellcheck component and handler which you have used?
On Mon, Jan 20, 2020 at 3:35 PM seeteshh wrote:
> Hello all,
>
> I am not able to check and test the spell check feature in Apache solr 8.4
>
> Tried multiple examples including
>
>
> https://examples.javacodegeeks.com/enterpri
Hi Mikhail
Thanks for your reply. Please help me with this.
Following are the screenshots:
[image: image.png]
[image: image.png]
json facet debug output:
json:
{
  facet:
  {
    color_refine:
    {
      domain:
      {
        excludeTags: "rassortment,top,top2,
Hi.
Can you share debugQuery=true output?
On Tue, Jan 21, 2020 at 1:37 PM kumar gaurav wrote:
> HI
>
> i have a parent child query in which i have used json facet for child
> faceting like following.
Hi,
I have a parent-child query in which I have used JSON faceting for child
faceting, like the following:
qt=/dismax
matchAllQueryRef1=+(+({!query v=$cq}))
sq=+{!lucene v=$matchAllQueryRef1}
q={!parent tag=top which=$pq filters=$child.fq score=max v=$cq}
child.fq={!tag=rcolor_refine}filter({!term f=color
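For anyone following along, a self-contained sketch of child faceting with json.facet (field names and the parent filter are hypothetical; uniqueBlock requires Solr 7.5+):

  curl http://localhost:8983/solr/products/query -d '
  {
    "query": "{!parent which=\"doc_type_s:parent\"}color_s:red",
    "facet": {
      "colors": {
        "type": "terms",
        "field": "color_s",
        "domain": { "blockChildren": "doc_type_s:parent" },
        "facet": { "parentCount": "uniqueBlock(_root_)" }
      }
    }
  }'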
Hello all,
I am not able to check and test the spellcheck feature in Apache Solr 8.4.
I tried multiple examples, including
https://examples.javacodegeeks.com/enterprise-java/apache-solr/solr-spellcheck-example/
However, I am not getting any results.
Regards,
Seetesh Hindlekar
Hi Robert,
How does this work?
{!frange l=0 u=5} sum(geodist())
On Fri, 3 Jan 2020 at 21:10, Robert Scavilla wrote:
> Thank you in advance for your help.
>
> I need to get the sum of a pivot range field. The following query uses the
> stats function to sum the *sumField* values
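For reference, function queries inside {!frange} are evaluated per document, so sum() there adds per-document values rather than aggregating across documents; as a plain distance filter, the pattern from the spatial docs looks like this (field and point are hypothetical):

  fq={!frange l=0 u=5}geodist()&sfield=store_p&pt=45.15,-93.85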
&group.query=eventTimeStamp:[2019-12-11T00:00:00Z TO
2019-12-11T23:59:59Z]&group.query=eventTimeStamp:[2019-12-12T00:00:00Z TO
2019-12-12T23:59:59Z]
Using the Apache Solr statistics option, we are able to calculate max and min
for the whole result, but we need the max and min value on a per-day basis.
[image: Untitled31.png]
When we try to get the max and min value per day, we are able to
fetch either min or max using the following query:
&group.sort=event1 desc or &group.sort=event1 asc
[image: Untitled6.png]
But we need both min a
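One way to get both min and max per day in a single request is a json.facet range facet with aggregations; a sketch using the field names from the message (the collection name is a placeholder):

  curl http://localhost:8983/solr/events/query -d '
  {
    "query": "*:*",
    "facet": {
      "per_day": {
        "type": "range",
        "field": "eventTimeStamp",
        "start": "2019-12-11T00:00:00Z",
        "end": "2019-12-13T00:00:00Z",
        "gap": "+1DAY",
        "facet": { "minVal": "min(event1)", "maxVal": "max(event1)" }
      }
    }
  }'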
Thank you in advance for your help.
I need to get the sum of a pivot range field. The following query uses the
stats function to sum the *sumField* values. I'm trying to sum the same
field in a frange subquery and I don't know how.
/select?defType=edismax&q=*:*&fq={!geofilt}&am
Why are you using a text field for location? You must use a proper spatial
field type.
You need to follow the instructions in the "Spatial Search" section of
the reference guide; here's the ref guide for Solr 7:
https://lucene.apache.org/solr/guide/7_7/spatial-search.html
Best,
Erick
> On Dec
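A sketch of what that section boils down to for simple point data (field names and the query point are placeholders):

  <fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>
  <field name="store_p" type="location" indexed="true" stored="true"/>

With that in place, a filter like fq={!geofilt sfield=store_p pt=28.633,77.219 d=10} finds documents within 10 km of the point.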
I have 100 documents in Solr; the type of the location field is
*org.apache.solr.schema.TextField.*
I am unable to run any query to search nearby points with reference to that
field.
So if you can help with it or provide some reference program in Java with
the same kind of implementation.
Thanks,
Niraj
That’s a little overstated; a full explanation of what’s safe and what’s not is
several pages long and depends on what you mean by “safe”.
Any modification to a schema, even if it doesn’t cause something to outright
break, may leave the index in an inconsistent state. For instance, remember
that Luc
Hi all,
I have question about the managed schema functionality. According to the
docs, "All changes to a collection’s schema require reindexing". This would
imply that if you use a managed schema and you use the schema API to update
the schema, then doing a full re-index is necessary each time.
Almost certainly. You can recreate all the “state.json” znodes “by hand”, but
that’ll be very, very difficult to get right.
It’s possible you have a ZK snapshot lying around you can restore; you’ll have
to look.
Best,
Erick
> On Nov 19, 2019, at 6:33 AM, vishal patel
> wrote:
>
> I have cre
I have created 2 shards on Solr 8.3.0. After that I created 10 collections and
also re-indexed data.
Some fields were changed in one collection. I deleted the version-2 folder from
zoo_data and uploaded the config for that collection again.
Is it necessary to create all the collections again? And to index the data again?
R
The LTS idea I believe comes from the solr downloads page where 7.7.x is
designated as LTS. https://lucene.apache.org/solr/downloads.html
On Wed, Nov 13, 2019 at 9:41 AM Shawn Heisey wrote:
> On 11/6/2019 9:58 AM, suyog joshi wrote:
> > So we can say its better to go with latest stable version (
On 11/6/2019 9:58 AM, suyog joshi wrote:
So we can say its better to go with latest stable version (8.x) instead of
7.x, which is LTS right now, but can soon become EOL post launching of 9.x
sometime early next year.
I don't know where you got the idea that 7.x is LTS ... but I do not
think th
Sure, thanks Erick for the quick reply as always :)
Regards,
Suyog Joshi
Please read through the release notes and the Solr and Lucene CHANGES.txt files
then ask specific questions.
Best,
Erick
> On Nov 8, 2019, at 4:10 AM, suyog joshi wrote:
>
> Hi Erik/Team,
>
> Thanks for your help in previous query. Just have one other doubt, can you
> please assist on it ?
>
Hi Erick/Team,
Thanks for your help with the previous query. I just have one other doubt; can
you please assist with it?
Q - Are there any major differences between the current LTS version (7.7.x) and
the latest stable releases (8.x.x) in terms of security, stability, logging,
monitoring, authentication, etc.?
Any
Pretty much correct. The only change I’d make is that 7x is no longer actively
supported, in the sense that only seriously critical bugs will be addressed.
You’ll note that the last release of 7x was 7.7.2 in early June. Increased
functionality, speedups, etc. won’t be back-ported.
So I can’t
Hi Erick,
Thank you so much for sharing the detailed information; it is indeed really
helpful for us to plan things out. We really appreciate your guidance.
So we can say it's better to go with the latest stable version (8.x) instead of
7.x, which is LTS right now but can soon become EOL after the launch of
It’s variable. The policy is that we try very hard to maintain one major
version of back-compat. So generally, if you start with, say, 7x, upgrading to
8x should be relatively straightforward. However, you will _not_ be able to
upgrade from 7x to 9x; you must re-index everything from scratch.
The d
Hi Team,
Can you please guide us on the queries below regarding Solr versions?
1. Are there any major differences (security, platform stability, etc.)
between the current LTS and the stable Solr version?
2. How long does a version remain LTS before becoming EOL?
3. How frequently does the LTS version change?
I'm trying to restore a couple of collections, and one keeps failing. This
happens to be the only one whose leader isn't on the host that the backup
was taken from.
The backup was done on server1, for all collections.
For this collection that is failing, the leader was on server2. All other
collections had their leader on server1. All collections had 1 replica, on
the other server.
I would think that having the replica there would be enough to perform a
restore.
Or does the backup need to happen on the actual leader?
Kind regards,
Koen De Groote
The NOT operator isn’t a Boolean NOT, so it requires some care; Chris Hostetter
wrote a good blog post about that. Try
q=*:* -(...your negated clause...)
The query q=-something really isn’t valid syntax, but some query parsers help
you out by silently putting the *:* in front of it. That’s not guaranteed
across all pars
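i.e. the reliable form anchors the negation to a match-all query, which the /export handler also accepts (field and value here are hypothetical):

  q=*:* -category_s:books

For a negative clause nested inside a larger query, the same trick applies: foo_s:bar OR (*:* -category_s:books).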
Hi,
I am facing an issue while working with Solr streaming expressions. I am using
/export for emitting tuples out of a streaming query. However, when I tried to
use the NOT operator in the Solr query it did not work. The same query works
with /select.
Please find the query below:
top(n=105,search(,qt="/expo
ading space after 'fq'. This is a syntax-parsing gotcha that
has to do with how embedded queries are parsed, which is what you need
when you compose two of them with an operator. It'd be kinda awkward to
fix that gotcha in Solr. There are other techniques too, but this is th
Thanks,
Could you please help me combine two geofilt fqs? The following gives an
error; it treats ")" as part of the d parameter and complains that 'd=80)'
is not a valid param:
({!geofilt}&sfield=adminLatLon&pt=33.0198431,-96.6988856&d=80)+OR+({!geofilt}&sfield=adminLatLon&pt=50.2171726,8
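One technique that sidesteps the gotcha is the _query_ hook, which lets each geofilt carry its own local params (the first point is from the message; the second is truncated there, so it is shown as a placeholder):

  fq=_query_:"{!geofilt sfield=adminLatLon pt=33.0198431,-96.6988856 d=80}" OR _query_:"{!geofilt sfield=adminLatLon pt=<lat>,<lon> d=80}"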
"sort" is a regular request parameter. In your non-working query, you
specified it as a local-param inside geofilt which isn't where it belongs.
If you want to sort from two points then you need to make up your mind on
how to combine the distances into some greater aggregate fun
ine the two FQs then sorting doesn't work.
Please help.
Best regards,
Anushka Gupta
From: David Smiley
Sent: Friday, September 13, 2019 10:29 PM
To: Anushka Gupta
Subject: [EXT]Re: Need urgent help with Solr spatial search using
SpatialRecursivePrefixTreeFieldType
Hello,
Please don
In addition to all the valuable information already shared, I am curious to
understand why you think the results are unreliable.
Most of the time it is the parameters that cause some of the terms of the
original document/corpus to be ignored (as simple as the min/max document
frequency to consider, or min
To use knnSearch, you need to submit a POST request to the stream request
handler.
Using your example query, you will need to rewrite it from this:
http://[SOLR URL]/mlt?q=sjkey:1414462-25600-5258&wt=json&indent=true&mlt=true&rows=100&mlt.fl=jobdescription&ml
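For context, a sketch of the streaming form this refers to, per the streaming-expressions docs (assuming sjkey is the collection's uniqueKey; the collection name and field list are placeholders):

  curl --data-urlencode 'expr=knnSearch(jobs,
                                        id="1414462-25600-5258",
                                        qf="jobdescription",
                                        k="100",
                                        fl="sjkey,score")' \
    http://localhost:8983/solr/jobs/stream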
I've been working with the MLT handler (Solr 8.1.1), calling it the same way
you did: http://[SOLR URL]/mlt. But the response is very unreliable, with
90% of the same queries resulting in a Java null pointer exception and only
10% returning the expected response. I do not know what the cause of this is.
I
Hi Solr Search Team,
I am a developer at IBM Kenexa BrassRing. We are using the Solr search
engine for searching jobs in our applications.
We are planning to use the MLT feature to get similar matching documents
(jobs) based on one document (job).
When trying to explore this option, we are using
Hi,
I am getting the log message below very frequently and I can't find more
details about it.
ZKPropertiesWriter Could not read DIH properties from
/configs//dataimport.properties :class
org.apache.zookeeper.KeeperException$NoNodeException
Details:
We have a Solr cluster containing 2 Solr node
On 5/3/2019 12:52 AM, Salmaan Rashid Syed wrote:
I say that the nodes are limited to 4 because when I launch Solr in cloud
mode, the first prompt that I get is to choose number of nodes [1-4]. When
I tried to enter 7, it says that they are more than 4 and choose a smaller
number.
That's the clo
This is just the setup for an experimental cluster (generally it also does not
make sense to have many instances on the same server). Once you have gained
more experience, take a look at
https://lucene.apache.org/solr/guide/7_7/taking-solr-to-production.html
to see how to set up clusters.
> Am 03.
Thanks Jörn for your reply.
I say that the nodes are limited to 4 because when I launch Solr in cloud
mode, the first prompt that I get is to choose the number of nodes [1-4]. When
I tried to enter 7, it said that this is more than 4 and to choose a smaller
number.
*Thanks and Regards,*
Salmaan Rashid
Thanks Walter,
Since I am new to Solr, and looking at your suggestion, it seems I
am trying to do something very complicated and outside the out-of-the-box
capabilities of Solr. I really don't want to do that.
I am not from a computer science background; my specialisation is in
analytics and AI.
Let me
BTW, why do you think that SolrCloud is limited to 4 nodes? More are certainly
possible.
> Am 03.05.2019 um 07:54 schrieb Salmaan Rashid Syed
> :
>
> Hi Solr Users,
>
> I am using Solr 7.6 in cloud mode with external zookeeper installed at
> ports 2181, 2182, 2183. Currently we have only one ser
You can have dedicated clusters per client, and/or you can protect it via
Kerberos or Basic Auth, or write your own authorization plugin based on OAuth.
I am not sure why you want to offer this on different ports to different
clients.
> Am 03.05.2019 um 07:54 schrieb Salmaan Rashid Syed
> :
>
>
The best option is to run all the collections at the same port. Intra-cluster
communication cannot be split over multiple ports, so this would require big
internal changes to Solr. And what about communication that does not belong to
a collection, like electing an overseer node?
Why do you want