For anyone interested, I realised that there are differences between the
various ways of using currency. Not all appear to support asymmetric rates,
and documents that do not need conversion are converted anyway, sometimes
introducing rounding error.
For point queries, e.g. fq=price_c:100,AUD
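To illustrate what I mean by asymmetric rates, here is a minimal currency.xml
sketch for the file-based exchange rate provider (the rates are made up):

<currencyConfig version="1.0">
  <rates>
    <rate from="USD" to="AUD" rate="1.10"/>
    <!-- deliberately not 1/1.10, i.e. asymmetric -->
    <rate from="AUD" to="USD" rate="0.92"/>
  </rates>
</currencyConfig>

A point query such as fq=price_c:100,AUD should then match documents whose
stored value, converted to AUD with the applicable rate, equals 100.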
- t
Don't worry, the way Hoss explained it is indeed the way I've known it to
work, but the example provided in the book piqued my curiosity, hence the
question in this thread.
Regards,
On Sep 30, 2014, at 5:59 PM, Timothy Potter wrote:
> Indeed - Hoss is correct ... it's a problem with the example
The parsing of bq will be according to the main query parser (defType
parameter) or any localParam-specified query parser, as well as all the
other query parameters (q.op, mm, qf, etc.). This should be true for both
dismax and edismax. In theory, you could have the main query be parsed with
dismax and the bq parsed with a different parser via localParams.
The "+" signs in the parsed boost query indicated the terms were ANDed
together, but maybe you can use the q.op and mm parameters to change the
default operator (I forget!).
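I haven't verified this, but one sketch would be to force the operator just
for the boost query via localParams, e.g.:

bq={!lucene q.op=OR}(Source2:sfdc^6 Source2:downloads^5 Source2:topics^3)

which should parse the three clauses as optional (SHOULD) terms even if
q.op=AND is set globally. debugQuery=true will confirm either way.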
-- Jack Krupansky
-Original Message-
From: shamik
Sent: Tuesday, September 30, 2014 7:19 PM
To: solr-user@lucen
Thanks a lot Jack, makes sense. Just curious: if we used the following bq
entry in solrconfig.xml
Source2:sfdc^6 Source2:downloads^5 Source2:topics^3
will it always be treated as an AND query? Some of our local results suggest
otherwise.
--
View this message in context:
http://lucene.472066.n3
Indeed - Hoss is correct ... it's a problem with the example in the
book ... my apologies for the confusion!
On Tue, Sep 30, 2014 at 3:57 PM, Chris Hostetter
wrote:
>
> : Thanks for the response; yes, the way you describe is how I know it works
> : and how I get it to work, but then what does the
: Thanks for the response; yes, the way you describe is how I know it works
: and how I get it to work, but then what does the snippet I see in the
: documentation about overriding the default search components mean?
It means that there is implicitly a set of search components that have
default behavior.
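If it helps, the implicit defaults can be written out explicitly in
solrconfig.xml. From memory (double-check the reference guide for your
version), the standard chain is:

<requestHandler name="/select" class="solr.SearchHandler">
  <arr name="components">
    <str>query</str>
    <str>facet</str>
    <str>mlt</str>
    <str>highlight</str>
    <str>stats</str>
    <str>debug</str>
  </arr>
</requestHandler>

Declaring a "components" list like this replaces the default set; use
first-components or last-components to prepend or append to it instead.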
A boost is basically an "OR" operation - it doesn't select any more or fewer
documents. So, three separate bq's are three OR terms. But your first bq is
a single query that ANDs three terms, and that AND-ed query is OR-ed with
the original query, so it only boosts documents that contain all three terms.
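To make that concrete, an untested sketch of the two variants:

Three independent boosts, each OR-ed into the main query:
&bq=Source2:sfdc^6&bq=Source2:downloads^5&bq=Source2:topics^3

One boost that only fires when all three terms match:
&bq=(+Source2:sfdc^6 +Source2:downloads^5 +Source2:topics^3)

Running each with debugQuery=true will show how they end up in the parsed
query.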
Hi,
I'm a little confused about the right syntax for defining boost queries. If I
use them in the following way:
http://localhost:8983/solr/testhandler?q=Application+Manager&bq=(Source2:sfdc^6
Source2:downloads^5 Source2:topics^3)&debugQuery=true
it gets translated to -->
+Source2:sfd
Just from a 20,000 ft. view, using the filterCache this way seems...odd.
+1 for using a different cache, though I say that being quite unfamiliar with
the code.
On Tue, Sep 30, 2014 at 1:53 PM, Alan Woodward wrote:
>> Once all the facets have been gathered, the co-ordinating node then asks
>> the subnodes for an exact count for the final top-N facets,
>
>
> What's the point of refining these counts? I thought it makes sense
> only for facet.limit-ed requests. Is that a correct statement? Can those who
Hi Ali,
Maybe you can leverage
Ahmet
On Sunday, September 28, 2014 10:25 PM, Ali Nazemian
wrote:
Dear all,
Hi,
I was wondering how I can implement boosting in Solr for words from a
specific list of important words? I mean, I want to have a list of important
words and tell Solr to score documents based on those words.
Hi,
If you want to avoid escaping madness, try the prefix query parser.
https://cwiki.apache.org/confluence/display/solr/Other+Parsers#OtherParsers-PrefixQueryParser
q={!prefix f=proprietaryMessage_tis}:25:234&fq={!prefix
f=proprietaryMessage_tis}:32A:1302
Ahmet
On Tuesday, September 30, 2014 6
Special characters like the colon are treated as term delimiters for a text
field. How do you really intend to query this "string"? You could simply make
it a "string" field.
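For example (a sketch, with a made-up field name):

<field name="proprietaryMessage_s" type="string" indexed="true" stored="true"/>

On a string field the whole value is indexed as a single token, so the colons
survive; they then only need backslash-escaping (\:) in the query itself.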
-- Jack Krupansky
-Original Message-
From: J'roo
Sent: Tuesday, September 30, 2014 11:08 AM
To: solr-user@lucene.a
Hello,
I already saw such a discussion, but I want to confirm.
On Tue, Sep 30, 2014 at 2:59 PM, Alan Woodward wrote:
> Once all the facets have been gathered, the co-ordinating node then asks
> the subnodes for an exact count for the final top-N facets,
What's the point of refining these counts? I thought it makes sense only for
facet.limit-ed requests.
Hi,
Does Lucene support syllabification of words out of the box? If so, is there
support for Brazilian Portuguese? I'm trying to set up a readability score
for short text descriptions, and this would be really helpful.
thanks,
--
Luis Carlos Guerrero
about.me/luis.guerrero
I have not tried it, but I would check the option of using the SynonymFilter
to duplicate certain query words. Another option: you can detect these words
at index time (e.g. in an UpdateProcessor) to give those documents a document
boost, if that fits your logic. Or even make a copy field that contains a
whitelist of the important words.
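For that last copy-field idea, an untested sketch using KeepWordFilterFactory
(field and file names are made up):

<copyField source="content" dest="important_words"/>

<fieldType name="keep_important" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.KeepWordFilterFactory" words="importantwords.txt"
            ignoreCase="true"/>
  </analyzer>
</fieldType>
<field name="important_words" type="keep_important" indexed="true"
       stored="false"/>

Then weight that field in an (e)dismax qf, e.g. qf=content important_words^5,
so documents containing the listed words score higher.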
Dear Koji,
Also, would you please tell me how I can access the term frequency of each
word? Should I do a word count on the content, or is it possible to access
the inverted index information to make the process more efficient? I don't
want to add too much time to the indexing of documents.
On T
https://gist.github.com/kindkid/c9f0ed9ee417064c1245
I'm using Solr 4.10.0, and getting a couple of error messages for
invalid complexphrase queries that I don't understand. Are these known
bugs or am I just doing something wrong?
Relevant portion of schema.xml...
Dear Koji,
Hi,
Thank you very much.
Do you know any example code for UpdateRequestProcessor? Anything would be
appreciated.
Best regards.
On Tue, Sep 30, 2014 at 3:41 AM, Koji Sekiguchi wrote:
> Hi Ali,
>
> I don't think Solr has such a function OOTB. One way I can think of is that
> you can implement an UpdateRequestProcessor.
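Not official example code, but here is a minimal sketch of that idea for
Solr 4.x. The class name, field name, and word list are all made up, and an
index-time document boost only affects scoring when norms are enabled on the
queried fields:

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;
import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

public class ImportantWordsBoostFactory extends UpdateRequestProcessorFactory {

  // Hypothetical hard-coded list; in practice load it from configuration.
  private static final Set<String> IMPORTANT =
      new HashSet<>(Arrays.asList("solr", "lucene", "search"));

  @Override
  public UpdateRequestProcessor getInstance(SolrQueryRequest req,
      SolrQueryResponse rsp, UpdateRequestProcessor next) {
    return new UpdateRequestProcessor(next) {
      @Override
      public void processAdd(AddUpdateCommand cmd) throws IOException {
        SolrInputDocument doc = cmd.getSolrInputDocument();
        Object content = doc.getFieldValue("content"); // assumed field name
        if (content != null) {
          // Boost the document if any important word appears in the content.
          for (String token : content.toString().toLowerCase().split("\\s+")) {
            if (IMPORTANT.contains(token)) {
              doc.setDocumentBoost(2.0f);
              break;
            }
          }
        }
        super.processAdd(cmd); // always pass the doc down the chain
      }
    };
  }
}

Wire the factory into an updateRequestProcessorChain in solrconfig.xml and
select that chain at update time with the update.chain request parameter.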
Hi,
I ran into a problem with the Solr dismax query parser. We're using Solr
4.10.0 and the field types mentioned below are taken from the example
schema.xml.
In a test we have a document with rather strange content in a field
named "name_tokenized" of type "text_general":
abc_
(It's a te
Hi,
I am using Solr 3.5.0 with the Java client SolrJ, which I cannot change.
I have the following type of docs:
:20:13-900-C05-P001:21:REF12349:25:23456789:32A:130202USD100,00:52A:/123456
I want to be able to find docs containing :25:234* AND :32A:1302* using
wildcards, which I thought to do like:
&q=
On 9/30/2014 4:38 AM, Charlie Hull wrote:
> We've just found a very similar issue at a client installation. They have
> around 27 million documents and are faceting on fields with high
> cardinality, and are unhappy with query performance and the server hardware
> necessary to make this performance
Hi all,
I'm using Solr 4.7.2 to implement multilingual search in my application.
I need to pass a query locale on the search request and choose between
custom tokenizers dynamically based on the provided locale value.
In Solr In Action - Chapter 14 (Multilingual Search), Listing 14.9 -
*Index
A bit of digging shows that the extra entries in the filter cache are added
when getting facets from a distributed search. Once all the facets have been
gathered, the co-ordinating node then asks the subnodes for an exact count
for the final top-N facets, and the path for executing this goes through the
filterCache.
Hi,
We've just found a very similar issue at a client installation. They have
around 27 million documents and are faceting on fields with high
cardinality, and are unhappy with query performance and the server hardware
necessary to make this performance acceptable. Last night we noticed the
filter
Or consider separating frequently changing data into a different core
from the slow-moving data, if you can, reducing the amount of data being
pushed around.
Upayavira
On Mon, Sep 29, 2014, at 09:16 PM, Bryan Bende wrote:
> You can try lowering the mergeFactor in solrconfig.xml to cause more
> merges.
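For reference, an untested sketch of where that setting lives in a 4.x
solrconfig.xml (the value is illustrative only):

<indexConfig>
  <mergeFactor>5</mergeFactor>
</indexConfig>

A lower value means more frequent merges and fewer segments to search, at the
cost of extra merge I/O while indexing.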