I am trying to figure out how to give weights to my suggestions but I can
find no documentation on how to do this correctly.
Here is my configuration:
solrconfig.xml: mySuggester, DocumentDictionaryFactory, FuzzyLookupFactory, suggest, popularity, textSuggest
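A minimal sketch of what a weighted suggester definition can look like in solrconfig.xml (the mapping of popularity to weightField and textSuggest to the analyzer field type is an assumption):

<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">suggest</str>
    <!-- DocumentDictionaryFactory reads each suggestion's weight from this field -->
    <str name="weightField">popularity</str>
    <str name="suggestAnalyzerFieldType">textSuggest</str>
  </lst>
</searchComponent>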
OK, let me explain what I am trying to do first, since there may be a better
approach. Recently I have been trying to increase Solr's matching precision
by requiring that all of the words in the query match before allowing a match
on a field. I am using edismax as my query parser and since it tokenizes
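For concreteness, the usual way to require every query term to match with edismax is the mm (minimum should match) parameter; a rough sketch of that kind of handler configuration, with the handler name and qf fields as placeholders:

<requestHandler name="/productQuery" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="qf">productTitle manufacturer searchKeyword</str>
    <!-- require 100% of the query terms to match before a document qualifies -->
    <str name="mm">100%</str>
  </lst>
</requestHandler>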
The documentation is very unclear (at least to me) around Query Elevation
Component and filter queries (fq param)
The documentation for Solr 4.9 states:
The fq Parameter
Query elevation respects the standard filter query (fq) parameter. That is,
if the query contains the fq parameter, all results
I recently moved an index from 3.6 non-distributed to Solr Cloud 4.4 with
three shards. My company uses a boosting function with a value assigned to
each document. This boosting function no longer works dependably and I
believe the cause is that IDF is not distributed.
This seems like it should be
I am indexing documents using the domain!id composite key format, e.g. id = k-690kohler!670614.
This ensures that all k-690kohler documents are indexed to the same shard.
This does cause numDocs to be unevenly distributed across shards,
probably even worse than with the default sharding algorithm.
Here is the sear
I am running a simple query in a non-distributed search using grouping. I am
getting incorrect facet field counts and I cannot figure out why.
Here is the query; you will notice that the facet field and facet query
counts are not the same. The facet query counts are correct. Any help is
appreciated.
Here is my query String:
/solr/singleproductindex/productQuery?fq=siteid:82&q=categories_82_is:109124&facet=true&facet.query=HeatingArea_numeric:[0%20TO%20*]&facet.field=HeatingArea_numeric&debugQuery=true
Here is my schema for that field:
Here is my request handler definition:
If I do group=false&group.facet=false the counts are what they should be for
the ungrouped counts... it seems like group.facet isn't working correctly.
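(For reference, the grouped version of this request adds parameters along the lines of &group=true&group.field=groupid&group.facet=true to the URL above; the group field name here is an assumption.)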
Hoss created: https://issues.apache.org/jira/browse/SOLR-5383
I am upgrading from 4.4 to 4.5.1.
I used to just upload my configurations to ZooKeeper and then install Solr
with no default core.
Solr would give me an error that no cores were created when I tried to
access it, until I ran the Collections API create command to make a collection;
however, now when I tr
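(For reference, the sequence I am describing is: upload the config set to ZooKeeper with zkcli.sh -cmd upconfig, then create the collection with a Collections API call such as /solr/admin/collections?action=CREATE&name=productindex&numShards=3&collection.configName=productindex, where the collection name, shard count, and config name are placeholders.)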
Here is an example URL that gives the error:
solr/productindex/productQuery?fq={!collapse%20field=groupid}&fq=discontinued:false&fq={!tag=manufacturer_string}manufacturer_string:(%22delta%22%20OR%20%22kohler%22)&fq=siteid:82&sort=score%20desc&facet=true&start=0&rows=48&fl=productid,manufacturer,un
I ran into an error with the CollapsingQParserPlugin when trying to use it in
tandem with tagging.
I get the following error whenever I use {!tag} in the same request as
{!collapse field=groupid}:
Oct 31, 2013 6:43:56 PM org.apache.tomcat.util.http.Cookies
processCookieHeader
INFO: Cookies: Invalid
x");
params.add("bf", "field(test_ti)");
params.add("fq","{!tag=test_ti}test_ti:5");
params.add("facet","true");
params.add("facet.field","{!ex=test_ti}test_ti");
assertQ(req(params), "*[count(//doc)=1]",
"
I've created the following tracker for the issue:
https://issues.apache.org/jira/browse/SOLR-5416
I'm having the same issue with SolrJ 4.5.1.
If I use the escapeQueryChars() function on a string like "a b c", it escapes
it to "a\+b\+c", which returns 0 results with the edismax query parser.
However, "a b c" returns results.
Thanks Shawn! That makes sense now. I appreciate the response.
Where did you add that directive? I am having the same problem.
I am running a data import and it is throwing all kinds of errors. I am
upgrading to 4.6 from 4.5.1 with the exact same schema, solrconfig, and DIH
configs.
Here is the error I am getting:
org.apache.solr.common.SolrException: ERROR: [doc=k-690kohler!670614] Error
adding field 'weight'='java.math.Bi
Here is the output from the logs of the server running the import:
598413 [updateExecutor-1-thread-62] ERROR
org.apache.solr.update.StreamingSolrServers – error
org.apache.solr.common.SolrException: Bad Request
request:
http://solr-shard-5.sys.id.build.com:8080/solr/productindex/update?update.
And here are the logs of one of the replicas:
2286617 [Thread-146] WARN org.apache.solr.cloud.RecoveryStrategy –
Stopping recovery for zkNodeName=core_node2core=productindex
2286627 [Thread-147] WARN org.apache.solr.cloud.RecoveryStrategy –
Stopping recovery for zkNodeName=core_node2core=prod
Also, I have tried setting the schema definition to float for the offending
fields in schema.xml, as well as casting my columns to strings in
the query. Both still give the same result.
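For what it's worth, the JdbcDataSource used by DIH also has a convertType attribute that tells it to convert JDBC values such as BigDecimal into the declared field type; a minimal sketch, with the driver, URL, and credentials as placeholders:

<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://dbhost/products"
            user="solr" password="***"
            convertType="true"/>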
My java version is:
java version "1.7.0_10"
Java(TM) SE Runtime Environment (build 1.7.0_10-b18)
Java
I have created Jira issue here:
https://issues.apache.org/jira/browse/SOLR-5551
https://issues.apache.org/jira/browse/SOLR-5773
I am having trouble with the CollapsingQParserPlugin showing duplicate groups when
the search results contain a member of a group but another member
of that same group is defined in the elevate component. I have
described the issue in more
> the elevated document and the group head are in the result set. What you
> are suggesting is that the elevated document becomes the group head. We
> can discuss the best way to handle this on the new ticket.
>
> Joel
>
> Joel Bernstein
> Search Engineer at Helio
I'm trying to index certain data from a table along with documents located on disk,
using JDBC and Tika. I can derive the file locations from the table, and
using that data I want to also import the documents into Solr. However, I'm
having trouble with my configuration.
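A minimal sketch of the kind of config in question, assuming the table column holding the file path is called filepath and the extracted text should land in a content field (all names, queries, and paths here are placeholders):

<dataConfig>
  <dataSource name="db" type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://dbhost/docs" user="solr" password="***"/>
  <dataSource name="bin" type="BinFileDataSource"/>
  <document>
    <entity name="item" dataSource="db" query="select id, filepath from docs">
      <field column="id" name="id"/>
      <!-- nested entity opens the file named by the outer row and extracts its text with Tika -->
      <entity name="tika" processor="TikaEntityProcessor" dataSource="bin"
              url="${item.filepath}" format="text">
        <field column="text" name="content"/>
      </entity>
    </entity>
  </document>
</dataConfig>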
Got it working with the updated config:
I'm having trouble with SolrJ generating a query like &q=kohler%5C+k for the
search term 'Kohler k'.
I am using Solr 4.3 in cloud mode. When I remove the %5C everything is fine.
I'm not sure why the %5C is being added when I call
solrQuery.setQuery('Kohler k');
Any help is appreciated.
solrQuery.setQuery(ClientUtils.escapeQueryChars(keyword));
It looks like the SolrJ ClientUtils.escapeQueryChars function is escaping
spaces with a backslash (which appears as %5C+ in the URL), and the escaped
query returns 0 results at search time.
I noticed that Magento is using the overwritePending commit directive but I
can't find any documentation on this. Does the overwritePending directive
purge any added docs since the last commit? Any help would be appreciated.
Yes I confirmed in the logs. I have also committed manually several times
using the updatehandler /update?commit=true
s?
>
> Upayavira
>
> On Thu, Feb 28, 2013, at 06:07 PM, dboychuck wrote:
> > Yes I confirmed in the logs. I have also committed manually several times
> > using the updatehandler /update?commit=true
I have a dismax request handler with a default fq parameter.
explicit
0.01
sku^9.0 upc^9.1 searchKeyword^1.9 series^2.8 productTitle^1.2 productID^9.0
manufacturer^4.0 masterFinish^1.5 theme^1.1 categoryName^2.0 finish^1.4
searchKeyword^2.1 text^0.2 productTitle^1.5 manufacturer^4.0 finish^1.
Think I answered my own question... I need to use an appends list
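The change in question looks roughly like this in the handler definition (the handler name and fq value are just illustrations):

<requestHandler name="/productQuery" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
  </lst>
  <!-- fq entries in an appends list are added to whatever the client sends
       instead of being replaced by it -->
  <lst name="appends">
    <str name="fq">discontinued:false</str>
  </lst>
</requestHandler>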
Adding autoGeneratePhraseQueries="true" to my field definitions has solved
the problem
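For reference, the attribute goes on the fieldType definition in schema.xml; a rough sketch, with the type name and analyzer chain as placeholders:

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100"
           autoGeneratePhraseQueries="true">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- WordDelimiterFilter splits on intra-word delimiters; autoGeneratePhraseQueries
         turns the resulting multi-token terms into phrase queries at query time -->
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>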
I am trying to import data from my db, but I have dynamic fields that I
don't always know the names of. Can someone tell me why something like this
doesn't work?
I have created a custom transformer for dynamic fields, but it doesn't seem to
be working correctly, and I'm not sure how to debug it against a live running
Solr instance.
Here is my transformer:
package org.build.com.solr;
import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.han
Also here is my schema
Thank you for your input. With your help I was able to solve my problem.
Although I could find no good example online of how to handle multivalued fields
with a custom transformer, your comments helped me find a solution.
Here is the code that handles both multi-valued and single-valued fields.
Before I optimize (build my spellchecker index), my Solr instance running in
Tomcat uses about 2 GB of memory; as soon as I optimize, it jumps to about 5 GB.
http://d.pr/i/oUQI
It just doesn't seem right.
http://pastebin.com/6Cg7F0dK
Is there anything wrong with my configuration?
When I dump the