Hi Chris,
Thanks for the insight.
1. "omitTermFreqAndPositions" is very straightforward but if I avoid
positions I'll refuse to serve phrase queries. I had searched for this in
past as well but I finally reached to the conclusion that there is no thing
like "omitTermFreq" (only). Perhaps because
Hello everyone!
I'm new to Solr and I've been facing a big problem since yesterday with the
SpellChecker component in Solr.
I followed the instructions on the wiki page and browsed the forums, but,
like other people, I don't get any results when typing a misspelled word.
Here is what I have:
*schema.xml:*
Erick, I configured the system the same way Brad did, and then no document
indexing was happening and I was not even getting any errors in the log. I
then changed my Tika to 0.6 and tried it, but no success. So the table
columns are getting indexed but the document is not. Let me
know if
title^1.1 body^1.0 comments^0.5
Could someone explain to me how to understand the following query debug
output, and how the score is computed.
Here are 4 documents with the word "Idée" in the title, body, or comments.
The results are in this order by score; I do not understand why the fourth
document is not second in the results.
Yep, you're on the right track here. Look at classes that implement
QueryResponseWriter;
perhaps CSVResponseWriter or XMLResponseWriter might be good places to start.
They get a SolrQueryResponse that contains the returned list...
Best
Erick
On Mon, Nov 7, 2011 at 2:29 PM, Draconissa wrote:
>
Several things:
1> You don't have EdgeNGramFilterFactory in your query analysis chain;
is this intentional?
2> You have a LOT of stuff going on here; you might try making your
analysis chain simpler and
adding stuff back in until you see the error. Don't forget to re-index!
3> Analysis doesn'
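For what it's worth, the usual prefix-search setup applies the grams only at index time, so a query-side chain without EdgeNGramFilterFactory can be intentional. A hedged sketch (the field type name and gram sizes are illustrative):

```xml
<fieldType name="text_prefix" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- grams at index time only: "water" -> w, wa, wat, ... -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="15" side="front"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With this shape, a plain query token like "wat" matches the indexed grams of "water" without having to gram the query side too.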
What's not clear is what you are doing to ensure that the file names pulled
from your database are being read (from disk? from a shared filesystem
somewhere?), analyzed, and sent to Solr.
So, somewhere you need to actually use the file name to pass on to
one of the processors that'll actually send
What does the debugQuery explanation show? The calculations
aren't all that precise for short fields: the length normalization is
encoded into a single byte, and comes out essentially the same for short fields.
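The encoding Erick mentions can be sketched in Python. This mirrors Lucene 3.x's SmallFloat.floatToByte315, which is (as I understand it) how lengthNorm = 1/sqrt(numTerms) gets squeezed into the norm byte:

```python
import math
import struct

def float_to_byte315(f):
    """Port of Lucene 3.x SmallFloat.floatToByte315 (3 mantissa bits,
    zero-exponent point 15): the single byte a field norm is stored as."""
    bits = struct.unpack('>i', struct.pack('>f', f))[0]
    small = bits >> (24 - 3)
    if small <= ((63 - 15) << 3):
        return 0 if bits <= 0 else 1
    if small >= ((63 - 15) << 3) + 0x100:
        return 0xFF
    return small - ((63 - 15) << 3)

# lengthNorm = 1/sqrt(numTerms); print the byte each field length maps to
for num_terms in range(1, 9):
    norm = 1.0 / math.sqrt(num_terms)
    print(num_terms, round(norm, 4), float_to_byte315(norm))
```

Running this shows that 3-term and 4-term fields (and likewise 6- and 7-term fields) land on the same byte, so their length normalization cannot distinguish them.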
Best
Erick
On Tue, Nov 8, 2011 at 7:22 AM, darul wrote:
> title^1.1 body^1.0 comments^0.5
>
> Could someone expl
Does pivot faceting solve this? At first glance it seems like the same thing,
but I haven't fully grokked the follow-up e-mails on this thread, sorry.
Erik
On Nov 7, 2011, at 13:03 , Steve Fatula wrote:
> So, I have a bunch of products indexed in Solr. Each product may exist in any
> n
@Prakash: Can you please format the body a bit for readability?
@Solr-Users: Is anybody else having any problems when running Zookeeper
from the latest code in the trunk(4.x)?
On Mon, Nov 7, 2011 at 4:44 PM, prakash chandrasekaran <
prakashchandraseka...@live.com> wrote:
>
> hi all, i followed
Hi Hoss,
Thanks for the quick response.
RE point 1) I'd mistyped (sorry) the incremental URL I'm using for updates.
Essentially every 5 minutes the system is making a HTTP call for...
http://localhost/solr/myfeed?clean=false&command=full-import&rows=5000
...which, when accessed, returns the followi
>From: Chris Hostetter
>To: Steve Fatula
>Cc: "solr-user@lucene.apache.org"
>Sent: Monday, November 7, 2011 7:17 PM
>Subject: Re: Faceting a multi valued field
>
>
>you can the
>level and most of the path and just index the "${parent_cat_id}:${cat_id}"
>tuples for every $cat_id the product i
All -
We're using DIH to import flat xml files. We're getting Heap memory
exceptions due to the file size. Is there any way to force DIH to do a
streaming parse rather than a DOM parse? I really don't want to chunk my
files up or increase the heap size.
Many Thanks!
Josh
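For comparison, here is what a streaming parse looks like outside DIH: a Python sketch using ElementTree's iterparse over a Solr-style <add> file (the in-memory bytes stand in for the real file; what you do with each <doc> is up to you):

```python
# Streaming parse of a Solr-style <add> file: iterparse visits each <doc>
# as soon as its end tag arrives, and we clear it immediately, so memory
# stays flat no matter how large the input is.
import xml.etree.ElementTree as ET
from io import BytesIO

# Stand-in for a large flat XML file on disk.
xml_data = b"<add>" + b"".join(
    b"<doc><field name='id'>%d</field></doc>" % i for i in range(1000)
) + b"</add>"

def stream_docs(stream):
    for _event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "doc":
            yield elem       # hand the doc to the caller (e.g. post it to Solr)
            elem.clear()     # then drop the subtree to keep memory flat

count = sum(1 for _ in stream_docs(BytesIO(xml_data)))
print(count)  # 1000
```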
Hello,
Did someone find a way to solve the parent-child problem? The Join option
is too complex because you have to create multiple document types and do the
join in the query.
ElasticSearch did a better job at solving this problem:
http://www.elasticsearch.org/guide/reference/mapping/nested-type.
Lucene itself has BlockJoinQuery/Collector (in contrib/join), which is
what ElasticSearch is using under the hood for its nested documents (I
think?).
But I don't think this has been exposed in Solr yet; patches welcome!
Mike McCandless
http://blog.mikemccandless.com
On Tue, Nov 8, 2011 at 1
Erick,
Thank you for your response to my concerns! After reading some documentation,
I came up with the following "solution." It is not doing exactly what I would
like it to do, but it's close.
Basically I set hl.snippets to be a large int, e.g. 50, and hl.fragsize a small
positive int, e.g. 1. Th
Hi,
I have 10k records indexed using Solr 1.4.
We have a requirement to search within search results.
Example: a query for 'water' returns 2000 results. I need the second query
for 'diseases' to search within those 2000 results. (I can't add a facet as
the second search should also check non-faceted
Wouldn't 'diseases AND water' or '+diseases +water' return you that result? Or
you could search on 'water' while filtering on 'diseases'.
Or am I missing something here?
François
On Nov 8, 2011, at 4:19 PM, sharnel pereira wrote:
> Hi,
>
> I have 10k records indexed using solr 1.4
>
> We hav
Why can't you add a filter query that is the original query? You can add an
arbitrary number of fq clauses, so you could build this up as long as
you want.
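For instance (a sketch; host, core, and terms are illustrative):

```
http://localhost:8983/solr/select?q=diseases&fq=water
```

Each further refinement just appends another fq clause, and as a bonus each fq result set is cached in Solr's filter cache.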
Best
Erick
2011/11/8 François Schiettecatte :
> Wouldn't 'diseases AND water' or '+diseases +water' return you that result?
> Or you could s
Both work fine. Thanks.
On Tue, Nov 8, 2011 at 4:44 PM, Erick Erickson wrote:
> Why can't you add a filter query that is the original query? You can add an
> arbitrary number of fq clauses, so you could build this up as long as
> you want.
>
> Best
> Erick
>
> 2011/11/8 François Schiettecatte :
>
I have a normalized database schema that I have flattened out to create
a Solr schema. My question is with regard to searching the multivalued
fields that are populated from the sub-entity in the DataImportHandler.
Example: I have 2 tables, CUSTOMER and NOTE.
Customer can have one to many n
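The usual DIH shape for that one-to-many relationship is a nested sub-entity whose rows populate a multivalued field. A hedged sketch (table, column, and field names are illustrative, and note_text would be declared multiValued="true" in schema.xml):

```xml
<document>
  <entity name="customer" query="SELECT ID, NAME FROM CUSTOMER">
    <field column="ID" name="id"/>
    <field column="NAME" name="name"/>
    <!-- one row per note; all values land in the multivalued note_text field -->
    <entity name="note" query="SELECT TEXT FROM NOTE WHERE CUSTOMER_ID = '${customer.ID}'">
      <field column="TEXT" name="note_text"/>
    </entity>
  </entity>
</document>
```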
Created an issue in Jira for this feature:
https://issues.apache.org/jira/browse/SOLR-2884
Martijn v Groningen-2 wrote:
>
> Ok I think I get this. I think this can be achieved if one could
> specify a filter inside a group and only documents that pass the
> filter get grouped. For example only
Hello,
I have custom POJOs, and I use SolrJ to read and index them with the
getBeans() method.
So now, I want to store a spatially searchable data member in my POJO.
I have in my schema.xml:
and
-
So, what object type must I have in my bean? LatLonType does not seem
to have a constructor,
I'm a newbie with Solr. Is there a way to create document counts for a list
of keywords without using a facet field? For example, say I have a fruit
related web site and want to list on the main page the top fruits; apples
(23), oranges (14), pears (5), etc.
The fruits are the "keywords" that are
: But when feeding in a PDF I'm getting a permissions error but not sure
: how to tell where, exactly, the problem is or what I need to do to fix
: it?!?
Interesting.
The problem is coming from the "PDFBox" library used to parse
PDF files on your Solr server, but the origin of the issue seems
Hi all, I am trying to upgrade Solr... I have tried to find resources or
tutorials but couldn't find any - yes, there is CHANGES.txt in Solr, but it
is a list, not a tutorial...
So here is the error that I get:
SEVERE: java.lang.NoClassDefFoundError:
org/apache/lucene/search/FieldComparatorSource
Thanks Erick, here are my responses:
1. Yes. What I want to achieve is that when the index is filtered with
EdgeNGram and the query is not, I can search on a partial string.
2. Good suggestion, will test it.
3. ok
4. Thank you
5/6. Will remove the synonyms and word delimite
I have tried putting Lucene core 3.1 into the Tomcat common/lib folder... and
the error message changed into this:
SEVERE: java.lang.RuntimeException:
org.apache.lucene.index.CorruptIndexException: Unknown format version: -9
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:960)
Hi,
I am getting the following error during indexing. I am trying to index 14
million records, but the document size is very small.
*Error:*
2011-11-08 14:53:24,634 ERROR [STDERR] (Thread-12)
java.lang.OutOfMemoryError: GC overhead limit exceeded
2011-11-08 14:54:07,910 ERROR [org.apache.coy
You can pass the full url to post.jar as an argument.
example -
java -Durl=http://localhost:8080/solr/update -jar post.jar
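For a delete specifically, -Ddata=args makes post.jar treat its arguments as literal XML instead of file names; a sketch (the delete query is illustrative):

```
java -Ddata=args -Durl=http://localhost:8080/solr/update -jar post.jar "<delete><query>id:123</query></delete>"
```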
Regards,
Jayendra
On Wed, Nov 9, 2011 at 2:37 AM, 刘浪 wrote:
>
> Hi,
> I want to use post.jar to delete the index. But my port is 8080; the
> default is 8983.
> How can
Thank you very much
Amos
--
> -Original Message-
> From: "Jayendra Patil"
> Sent: Wednesday, November 9, 2011
> To: solr-user@lucene.apache.org
> Cc:
> Subject: Re: How to change the port of post.jar
>
> You can pass the full url to post.jar as an argument.
>
> example -
>
> java -Durl=http://localhost:8080/so