Hi,
For deep pagination, it is recommended that we use cursorMark and provide a
sort order with a unique field as a tiebreaker.
I want my results in relevancy order and so have no sort specified on my query
by default.
Do I need to explicitly set :
sort : score desc, asc
Or can I get away with jus
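(For reference: cursorMark requires the uniqueKey field as the final sort
clause. A minimal sketch of the first request, assuming `id` is the
uniqueKey field:)

```
q=...&rows=10&sort=score desc, id asc&cursorMark=*
```

Each response includes a nextCursorMark value, which is passed as the
cursorMark parameter on the following request.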
Hi,
Which version of Solr are you using?
And do you have the error log?
Regards,
Edwin
On Mon, 16 Jul 2018 at 21:20, Akshay Patil wrote:
> Hi
>
> I am a student. For my master's thesis I am working on Learning to Rank.
> As I did research on it, I found a solution provided by the
Hi Mikhail,
Thank you for suggesting the use of JSON facets. I tried json.facet; it works
great, and I am able to make a single query instead of two. Now I am planning
to get rid of the duplicate child fields in the parent docs. However, I ran
into problems while forming negative queries with block join.
He
Thank you. I'll try the child doc transformer.
On a related question, if I delete a parent document, will its children be
deleted also? Or do I have to have a parent_id field in each child so that the
child docs can be deleted?
On 7/22/18 10:05 AM, Mikhail Khludnev wrote:
Hello,
Check [chil
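(Not from the thread, but for context on the deletion question above:
deleting a parent document does not automatically delete its children. One
common approach with nested documents is to delete the whole block via the
_root_ field, assuming it is indexed as in the default schemas; a hedged
sketch, with PARENT_ID as a placeholder for the parent's uniqueKey value:)

```xml
<delete>
  <query>_root_:PARENT_ID</query>
</delete>
```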
: We are using Solr as a user index, and users have email addresses.
:
: Our old search behavior used a SQL substring match for any search
: terms entered, and so users are used to being able to search for e.g.
: "chr" and finding my email address ("ch...@christopherschultz.net").
:
: By defaul
: > defType=edismax q=sysadmin name:Mike qf=title text last_name
: > first_name
:
: Aside: I'm curious about the use of "qf", here. Since I didn't want my
: users to have to specify any particular field to search, I created an
: "all" field and dumped everything into it. It seems like it would
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Chris,
On 7/24/18 1:40 PM, Chris Hostetter wrote:
>
> : So if I want to alias the "first_name" field to "first" and the :
> "last_name" field to "last", then I would ... do what, exactly?
>
> See the last example here...
>
> https://lucene.apache.
: Chris, I was trying the below method for sorting the faceted buckets but
: am seeing that the function query query($q) applies only to the score
: from “q” parameter. My solr request has a combination of q, “bq” and
: “bf” and it looks like the function query query($q) is calculating the
: s
: So if I want to alias the "first_name" field to "first" and the
: "last_name" field to "last", then I would ... do what, exactly?
See the last example here...
https://lucene.apache.org/solr/guide/7_4/the-extended-dismax-query-parser.html#examples-of-edismax-queries
defType=edismax
q=sysadmin
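(The aliasing itself is done with per-alias qf parameters; a hedged sketch,
assuming first_name and last_name are the real schema fields:)

```
defType=edismax
q=first:Mike last:Smith
f.first.qf=first_name
f.last.qf=last_name
```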
Dear Erick,
Unfortunately I deleted the original Solr logs, so I couldn't post them here.
But removing the hard commit from the loop solved my problem and made
indexing faster. Now no errors are thrown from the client side.
Thanks
Arunan
On 22 July 2018 at 04:45, Erick Erickson wrote:
> c
1. the standard way to do this is to use ngrams. The index is larger,
but it gives you much quicker searches than trying to use pre- and
post-fix wildcards
2. use a fieldType with KeywordTokenizerFactory + (probably)
LowerCaseFilterFactory + TrimFilterFactory. And, in your case,
NGramTokenizerFactory
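(A sketch of such a fieldType; the name, the gram sizes, and the use of
NGramTokenizerFactory at index time only are assumptions, not a tested
config:)

```xml
<fieldType name="ngram_text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="15"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
  </analyzer>
</fieldType>
```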
Thanks, we feel confident we will not need the optimization for our
circumstances and will just remove the code. Appreciate the response!
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
Emir,
On 3/6/18 2:42 AM, Emir Arnautović wrote:
> I did not try it, but the first thing that came to my mind is to
> use edismax’s ability to define field aliases, something like
> f.f1.qf=field_1. Note that it is not recommended to have field
> na
Rick,
On 3/6/18 6:39 PM, Rick Leir wrote:
> The first thing that came to mind is that you are planning not to
> have an app in front of Solr. Without a web app, you will need to
> trust whoever can get access to Solr. Maybe you are on an
> intrane
All,
We are using Solr as a user index, and users have email addresses.
Our old search behavior used a SQL substring match for any search
terms entered, and so users are used to being able to search for e.g.
"chr" and finding my email address ("ch.
Hi. I'm wondering the same. Some "updateBean" calls take a very long time, up
to 130,000 ms; typically they take around 100 ms. I'm using 2 threads and a
queue size of 30. I haven't figured out what the default thread count is. 0?
Does the optimize actually fail, or does it just take a long time? That is,
if you wait, does the index eventually get down to one segment?
For long-running operations, the _request_ can time out even though
the action is still continuing.
But that brings up whether you should optimize in the first place.
O
Hi Amanda,
> do I just need to change the settings from smartChinese to the ones you
posted here
Yes, the settings I posted should work for you, at least partially.
If you are happy with the results, it's OK!
But please take this as a starting point because it's not perfect.
> Or do I need
Thank you, Shalin.
Here is the Jira https://issues.apache.org/jira/browse/SOLR-12585
On Mon, Jul 23, 2018 at 11:21 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Can you please open a Jira issue? I don't think we handle DNS problems very
> well during startup. Thanks.
>
> On Tue,
Hi Tomoko,
Thanks so much for this explanation - I did not even know this was
possible! I will try it out, but I have one question: do I just need to
change the settings from smartChinese to the ones you posted here:
Or do I need to still do something with the SmartChineseAnalyzer
Hi,
We have recently been performing a bulk reindexing against a large database
of ours. At the end of reindexing all documents we successfully perform a
CloudSolrClient.commit(). The entire reindexing process takes around 9
hours. This is Solr 7.3, by the way.
Anyway, immediately after the comm
Hi,
> Am 15.06.2018 um 14:54 schrieb Christian Spitzlay
> :
>
>
>> Am 15.06.2018 um 01:23 schrieb Joel Bernstein :
>>
>> We have to check the behavior of the innerJoin. I suspect that it's closing
>> the second stream when the first stream is finished. This would cause a
>> broken pipe with t
Hi all,
We are trying to use Solr Cloud in OpenShift. We manage our Solr with a
StatefulSet. All Solr functionality works well except indexing.
We index our docs from Hadoop via the SolrJ jar, which tries to index to a
specific Pod, but OpenShift blocks access to internal Pods.
In my case, a separate service for extern