Hi Samik,
Please see the parameters of edismax:
https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser
If lowercaseOperators=true, then "and" is treated as AND. The stopwords
parameter could also be used.
Stopwords and edismax have had issues (when mm=100%) in the past. Not sure about the current state.
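For example, a minimal sketch of turning both behaviors off via request
parameters; the query text and qf fields are hypothetical:

    q=vintage and blues
    &defType=edismax
    &qf=title text
    &lowercaseOperators=false
    &stopwords=false

With stopwords=false, edismax ignores the StopFilterFactory configured in
the query analyzer when parsing the query.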
Just a side note: a sidecar index might be really useful for updating blocked
docs, but it's in an experimental stage, IIRC.
http://www.lucenerevolution.org/2013/Sidecar-Index-Solr-Components-for-Parallel-Index-Management
On Wed, Feb 19, 2014 at 10:42 AM, Mikhail Khludnev <mkhlud...@griddynamics.com> wrote:
Hello,
I'm migrating to solr 4.6.1 and have problems with the ICUCollationField
(apache-solr-ref-guide-4.6.pdf, pp. 31 and 100).
I consistently get the error message
Error loading class 'solr.ICUCollationField'.
even after
INFO: Adding 'file:/srv/solr4.6.1/contrib/analysis-extras/lib/icu4j-49.1
You need the Solr analysis-extras jar in your classpath, too.
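A sketch of the solrconfig.xml lib directives; the relative paths are
illustrative and depend on where your core's instanceDir sits:

    <!-- dependencies such as icu4j -->
    <lib dir="../../contrib/analysis-extras/lib" regex=".*\.jar" />
    <lib dir="../../contrib/analysis-extras/lucene-libs" regex=".*\.jar" />
    <!-- the solr-analysis-extras jar itself, shipped in dist/ -->
    <lib dir="../../dist/" regex="solr-analysis-extras-.*\.jar" />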
On Wed, Feb 19, 2014 at 6:45 AM, Thomas Fischer wrote:
> Hello,
>
> I'm migrating to solr 4.6.1 and have problems with the ICUCollationField
> (apache-solr-ref-guide-4.6.pdf, pp. 31 and 100).
>
> I consistently get the error message
Hello Robert,
I already added
contrib/analysis-extras/lib/
and
contrib/analysis-extras/lucene-libs/
via lib directives in solrconfig; this is why the classes mentioned are loaded.
Do you know which jar is supposed to contain the ICUCollationField?
Best regards
Thomas
On 19.02.2014 at 13:54, Robert wrote:
You need the Solr analysis-extras jar itself, too.
On Wed, Feb 19, 2014 at 8:25 AM, Thomas Fischer wrote:
> Hello Robert,
>
> I already added
> contrib/analysis-extras/lib/
> and
> contrib/analysis-extras/lucene-libs/
> via lib directives in solrconfig; this is why the classes mentioned are
>
Simply add the lowercaseOperators=false parameter, or add it to the
"defaults" section of the request handler in solrconfig, and then "and" will
not be treated as "AND".
The wiki is confusing: it shouldn't be advising you how to set the
parameter to achieve the default setting! Rather, it should explain the default and when you'd want to change it.
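A sketch of the solrconfig.xml variant; the handler name here is
illustrative, use whichever handler you already have:

    <requestHandler name="/select" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="defType">edismax</str>
        <str name="lowercaseOperators">false</str>
      </lst>
    </requestHandler>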
Hi Shawn,
Thanks for your answer.
Actually we don't have a performance problem, because we only do select requests.
We have 4 CPUs, 8 cores, 24 GB RAM.
I know how to create an alias; my question was just about performance,
and you are right:
it's impossible to answer this question without more information.
Thanks, that helps!
I'm trying to migrate from the now deprecated ICUCollationKeyFilterFactory I
used before to the ICUCollationField.
Is there any description of how to achieve this?
First tries now yield:
ICUCollationField does not support specifying an analyzer.
which makes it complicated since
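For reference, a minimal sketch of declaring the field type: since
ICUCollationField takes no analyzer, the collator is configured through
attributes instead (the field type name and locale here are illustrative):

    <fieldType name="collated_de" class="solr.ICUCollationField"
               locale="de" strength="primary" />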
Hmm, for standardization of text fields, collation might be a little
awkward.
For your German umlauts, what do you mean by standardize? Is this to
achieve equivalence of e.g. oe to ö in your search terms?
In that case, a simpler approach would be to put
GermanNormalizationFilterFactory in your analysis chain.
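A sketch of such a chain; the field type name and tokenizer choice are
assumptions:

    <fieldType name="text_de" class="solr.TextField" positionIncrementGap="100">
      <analyzer>
        <tokenizer class="solr.StandardTokenizerFactory"/>
        <filter class="solr.LowerCaseFilterFactory"/>
        <!-- folds ä/ö/ü and their ae/oe/ue spellings to the base vowel,
             and ß to ss -->
        <filter class="solr.GermanNormalizationFilterFactory"/>
      </analyzer>
    </fieldType>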
> Hmm, for standardization of text fields, collation might be a little
> awkward.
I arrived there after using custom rules for a while (see "RuleBasedCollator"
on http://wiki.apache.org/solr/UnicodeCollation) and then being told
"For better performance, less memory usage, and support for more locales
On Wed, Feb 19, 2014 at 10:33 AM, Thomas Fischer wrote:
>
> > Hmm, for standardization of text fields, collation might be a little
> > awkward.
>
> I arrived there after using custom rules for a while (see
> "RuleBasedCollator" on http://wiki.apache.org/solr/UnicodeCollation) and
> then being told
Hello everybody,
I'm using Solr 4.6.1, and I'd like to know if there's a way to determine
exactly the number of characters in a fragment used in highlighting. If I use
hl.fragsize=70, the length of the fragments that I get back varies (often),
and I get results 90 characters long.
Regards and thanks
I believe that there's a configuration option that'll make on-deck searchers be
used if they're needed even if they're not fully warmed yet. You might try that
option and see if it doesn't solve your 503 errors.
Thanks,
Greg
On Feb 18, 2014, at 9:05 PM, Erick Erickson wrote:
> Colin:
>
> Sto
Hi Juan,
Are you counting the number of characters of the HTML-rendered snippet?
I think the pre and post strings (HTML markup that is not displayed) are
causing that difference.
Ahmet
On Wednesday, February 19, 2014 5:53 PM, Juan Carlos Serrano
wrote:
Hello everybody,
I'm using Solr 4.6.1. and I'd
On 19/02/14 07:57, Vineet Mishra wrote:
Thanks for all your responses, but my doubt is: to which *Server:Port* should
the query be made, as we don't know which server has crashed or which server
might crash in the future (any server can go down)?
That is what CloudSolrServer will deal with for you. It knows the cluster
state from ZooKeeper and routes requests to live nodes.
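A minimal SolrJ sketch of this, assuming Solr 4.x; the ZooKeeper ensemble
address and collection name are illustrative:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class CloudQueryExample {
        public static void main(String[] args) throws Exception {
            // Connect to ZooKeeper, not to any single Solr node; the client
            // watches cluster state and picks a live server per request.
            CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
            server.setDefaultCollection("collection1");
            QueryResponse rsp = server.query(new SolrQuery("*:*"));
            System.out.println("found " + rsp.getResults().getNumFound() + " docs");
            server.shutdown();
        }
    }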
On 2/19/2014 8:59 AM, Greg Walters wrote:
I believe that there's a configuration option that'll make on-deck searchers be
used if they're needed even if they're not fully warmed yet. You might try that
option and see if it doesn't solve your 503 errors.
I'm fairly sure that this option (useColdSearcher)
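For reference, the option lives in solrconfig.xml; a minimal sketch:

    <!-- If true, a request arriving while no searcher is registered yet
         (e.g. at startup) is served by the still-warming searcher instead
         of blocking until warming finishes. -->
    <useColdSearcher>true</useColdSearcher>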
> A quick peek at the code (branch_4x, SolrCore.java, starting at line 1647)
> seems to confirm this.
It seems my understanding of that option was wrong! Thanks for correcting me,
Shawn.
Greg
On Feb 19, 2014, at 11:19 AM, Shawn Heisey wrote:
> On 2/19/2014 8:59 AM, Greg Walters wrote:
>> I b
Juan,
Pay close attention to the boundary scanner you’re employing:
http://wiki.apache.org/solr/HighlightingParameters#hl.boundaryScanner
You can explicitly indicate a type (hl.bs.type) with options such as
CHARACTER, WORD, SENTENCE, and LINE. The default is WORD (as the wiki
indicates).
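A sketch of the request parameters involved; the field name is hypothetical,
and note that the boundary scanner only applies to the FastVectorHighlighter,
which requires term vectors with positions and offsets on the field:

    q=content:something
    &hl=true
    &hl.fl=content
    &hl.fragsize=70
    &hl.useFastVectorHighlighter=true
    &hl.boundaryScanner=breakIterator
    &hl.bs.type=CHARACTER

With hl.bs.type=CHARACTER the scanner breaks as close to hl.fragsize as
character boundaries allow, which should land you nearer to 70 than WORD does.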
Hi,
If we set up a SolrCloud cluster with 3 nodes and then have 100+ million
documents to index, how should we be indexing: a) will the indexing requests
be going to each machine, assuming we are able to divide the data based on
some field, or b) should we be sending the requests to one endpoint and wh
Thanks, Chris. Adding autoWarming to the filter cache made another big
improvement.
Between increasing the soft commit interval to 60s, fixing the q:* query, and
autowarming the filter caches, my 95th-percentile latencies are down to a very
acceptable range, almost an order of magnitude improvement. :-)
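For reference, a sketch of the filter cache entry in solrconfig.xml; the
sizes and autowarm count here are illustrative, not recommendations:

    <filterCache class="solr.FastLRUCache"
                 size="512"
                 initialSize="512"
                 autowarmCount="128"/>

On each commit, the most recently used autowarmCount entries are re-executed
against the new searcher, so the first queries after a commit find warm
caches, at the cost of slower searcher turnaround.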
-Allan
Why don't you do parallel indexing and then merge everything into one and
replicate that from the master to the slaves in SolrCloud?
Thanks,
Kranti K. Parisa
http://www.linkedin.com/in/krantiparisa
On Wed, Feb 19, 2014 at 3:04 PM, Susheel Kumar <
susheel.ku...@thedigitalgroup.net> wrote:
> Hi,
Is there a way to get all the fields that are in a particular query?
Ultimately I'd like to restrict the fields that a user can use to search,
so I want to make sure that there aren't any fields in the query that they
should not be allowed to search.
Hi Jamie,
This may not be a direct answer to your question, but your Q reminded me of
edismax's uf parameter.
http://wiki.apache.org/solr/ExtendedDisMax#uf_.28User_Fields.29
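A sketch of uf in action; defType and the field names are hypothetical:

    defType=edismax
    &q=title:solr manu:apache
    &uf=title manu

uf also accepts wildcards and negation, e.g. uf=* -price to let users query
every field except price.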
On Wednesday, February 19, 2014 11:18 PM, Jamie Johnson
wrote:
Is there a way to get all the fields that are in a particular q
Thanks for your reply, Kranti. If we want to shard the index across 3 nodes,
will the master/slave concept help? We are using Solr 4.6, so should we
utilize the master/slave concept or move to sharding?
-Original Message-
From: Kranti Parisa [mailto:kranti.par...@gmail.co
This actually may do what I want, I'll have to check. Right now we are
using Lucene directly and not Solr for this particular project, but if this
fits the bill we may be able to use just the query parser.
On Wed, Feb 19, 2014 at 4:30 PM, Ahmet Arslan wrote:
> Hi Jamie,
>
> May not be direct a
On closer inspection this isn't quite what I'm looking for. The
functionality is spot on, but I'm looking for a way to do this using a
query parser in Lucene core, i.e. StandardQueryParser, unless folks have
experience using the Solr query parsers with vanilla Lucene? Though
I'd prefer to stick
Try asking the question on the Lucene user list - this is the Solr user
list.
Also, clarify whether you are trying to get the list of fields used in a
query or trying to limit the fields that can be used in a query. uf does the
latter, but your latest message suggested the former. You're confusing the two.
Maybe he can use updateable docvalues (LUCENE-5189)? I heard that was a
thing. Has it made its way into Solr in some way?
-Mike
On 2/19/2014 4:23 AM, Mikhail Khludnev wrote:
Just a side note: a sidecar index might be really useful for updating blocked
docs, but it's in an experimental stage, IIRC
Thanks Jack. I ultimately want to limit the fields, but I'd take getting them
if that were available. I'll post to the Lucene list, though.
On Feb 19, 2014 8:22 PM, "Jack Krupansky" wrote:
> Try asking the question on the Lucene user list - this is the Solr user
> list.
>
> Also, clarify whether you are tryin
Thanks, Erick. I will try that.
On Sun, Feb 16, 2014 at 5:07 PM, Erick Erickson wrote:
> Stored fields are what the Solr DocumentCache in solrconfig.xml
> is all about.
>
> My general feeling is that stored fields are mostly irrelevant for
> search speed, especially if lazy-loading is enabled.
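For reference, a sketch of the related solrconfig.xml entries; sizes are
illustrative. Note that the documentCache is not autowarmed, since internal
document IDs change from searcher to searcher:

    <documentCache class="solr.LRUCache"
                   size="512"
                   initialSize="512"
                   autowarmCount="0"/>
    <enableLazyFieldLoading>true</enableLazyFieldLoading>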
Gregg,
The QueryResultCache caches a sorted int array of results matching a query.
This should overlap very nicely with your desired behavior, as a hit in this
cache neither runs a Lucene query nor needs to recalculate scores.
Now, ‘for the life of the Searcher’ is the trick here. You
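For reference, a sketch of its solrconfig.xml entry; sizes are illustrative:

    <queryResultCache class="solr.LRUCache"
                      size="512"
                      initialSize="512"
                      autowarmCount="32"/>

Each commit opens a new searcher with a fresh cache, which is why "for the
life of the Searcher" is the catch; autowarmCount re-executes the most
recently used entries against the new searcher to soften that.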
Here’s a rather obvious question: have you rebuilt your spell index recently?
Is it possible the offending numbers snuck into the spell dictionary? The
terms component will show you what's in your current, searchable field… but not
the dictionary.
If my memory serves correctly, with collate=true
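If the dictionary is stale, a sketch of forcing a rebuild on an index-based
spellchecker; the dictionary name is illustrative:

    q=whatever
    &spellcheck=true
    &spellcheck.dictionary=default
    &spellcheck.build=true

(sent to whichever request handler has the spellcheck component attached)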