Yes, Erick, I did. Actually, the course of events was as follows. I started
with the example config files (solrconfig.xml & schema.xml) and added my own
fields. In my search I have two clauses: one for a phrase and one for a set of
keywords. And from the very beginning it worked fine, until on the second
day
On Wed, Jun 3, 2009 at 1:59 AM, anuvenk wrote:
>
> I have to search over multiple fields so passing everything in the 'q'
> might
> not be neat. Can something be done with the facet.query to accomplish this?
> I'm using the facet parameters. I'm not familiar with Java so not sure if a
> function
1: modify your schema.xml, for example (see the sketch after this list):
2: add your field:
3: add your analyzer jar to {solr_dir}\lib\
4: rebuild Solr and you will find the new build in {solr_dir}\dist
5: follow the tutorial to set up Solr
6: open the Solr admin page in your browser and use the analysis page to check the analyzer; it
will tell you how your text is analyzed
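For steps 1 and 2, here is a rough sketch of what the schema.xml additions could look like (the field and type names are made up, and CJKAnalyzer from the Lucene contrib jar is used only as a stand-in for whatever analyzer jar you actually put into lib):

  <fieldType name="text_cn" class="solr.TextField">
    <!-- delegate all analysis to the contributed CJK/Chinese analyzer -->
    <analyzer class="org.apache.lucene.analysis.cjk.CJKAnalyzer"/>
  </fieldType>

  <field name="content_cn" type="text_cn" indexed="true" stored="true"/>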
I've tried to read up on how to decide, when writing a query, which
criteria go in the q parameter and which go in the fq parameter, to
achieve optimal performance. Is there some documentation that
describes how each parameter is treated internally, or even better, some
kind of rule of thumb
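For what it's worth, the usual split looks roughly like this (field names and values below are made up purely for illustration): the user's typed keywords go into q, while fixed constraints go into fq, since filter queries do not affect scoring and are cached separately:

  http://localhost:8983/solr/select?q=ipod+charger&fq=category:electronics&fq=inStock:true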
Hello,
300K is a pretty small index. I wouldn't worry about the number of synonyms
unless you are turning a single term into dozens of ORed terms.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: anuvenk
> To: solr-user@lucene.apache.org
I'm using query time synonyms. I have more fields in my index though. This is
just an example or sample of data from my index. Yes, we don't have millions
of documents. It could be around 300,000 and might increase in the future. The
reason I'm using query time synonyms is because of the nature of my data
Right now we have figured out the problem with inserting new documents, which we solved by
removing "special" ASCII chars not accepted in XML on Solr 1.3.
The question now is: how to configure Solr 1.3 with Chinese support?
James liu-2 wrote:
>
> You mean how to configure Solr to support Chinese?
>
> Update prob
Hey,
From what you have written I'm guessing that in your schema.xml file, you
have defined the field manu to be of type "text", which is good for keyword
searches, as the text type tokenizes on whitespace, i.e. "Dell Inc." is indexed
as dell, inc., so keyword searches match either dell or inc. But
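(A common way around this, sketched only and with names you would pick yourself, is to keep the tokenized field for searching and copy it into an untokenized string field that you facet on:

  <field name="manu" type="text" indexed="true" stored="true"/>
  <field name="manu_exact" type="string" indexed="true" stored="false"/>
  <copyField source="manu" dest="manu_exact"/>

You would then facet on manu_exact, e.g. query.addFacetField("manu_exact") in SolrJ, while still searching against manu.)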
I'm glad my late night explanation helped.
You may be right about there being a better name for this functionality.
Note that we do have support for file-based (dictionary-like) spellchecker, too.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
>
Excellent. Now everything makes sense to me. :-)
The spell checking suggestion is the closest variant of the user input that
actually exists in the main index. The so-called "correction" is relative to the
indexed text. So there is no need for a brute-force list of all
correctly spelled words. Maybe
Hi,
If index-time synonym expansion/indexing is used, then a large synonym file
means your index is going to be bigger.
If query-time synonym expansion is used, then your queries are going to be
larger (i.e. more ORs, thus a bit slower).
How much, it really depends on your specific synonyms, s
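As a sketch of where that choice shows up in schema.xml (assuming the stock SynonymFilterFactory and a synonyms.txt in your conf directory), index-time versus query-time is just a matter of which analyzer chain the filter sits in:

  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- index-time expansion: synonyms get baked into the index (bigger index) -->
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- for query-time expansion, move the SynonymFilterFactory here instead (more ORed terms per query) -->
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>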
Hello,
In short, the assumption behind this type of SC is that the text in the
main index is (mostly) correctly spelled. When the SC finds query
terms that are close in spelling to words indexed in SC, it offers
spelling suggestions/correction using those presumably correctly spelled terms
(the
In my index i have legal faqs, forms, legal videos etc with a state field for
each resource.
Now if I search for real estate san diego, I want to be able to return other
'california' results, i.e. results from san francisco.
I have the following fields in the index
title
Thanks Otis!!!
On Tue, 2009-06-02 at 14:13 -0700, Otis Gospodnetic wrote:
> Hi Darren,
>
> Yes, it is possible! :)
> First you need to make sure your Solr has multiple indices using one of the
> following options:
>
> http://wiki.apache.org/solr/MultipleIndexes
>
> The most popular approach is
Sorry for not being able to get my point across.
I know the syntax that leads to an index build for spell checking. I actually
ran the command and saw some additional files created in the data\spellchecker1
directory. What I don't understand is what is in there, as I cannot trick
Solr into making spell suggestions
Are you trying to find items with size > 7? If so, 7* is not the way to do
that - 7* will find items whose "size" field starts with "7", e.g. 7, 70, 71,
72, 73...79, 700, 701.
What you may want is an open-ended range query: q=size:[7 TO *] (I think that's
the correct syntax, but pleas
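Spelled out against the same handler (URLs are illustrative only):

  ...?q=size:7*          (prefix query: matches values whose text starts with 7, e.g. 7, 70, 700)
  ...?q=size:[7 TO *]    (open-ended range: size greater than or equal to 7)

One caveat: for the range to compare numerically rather than lexically, the size field generally needs to be a sortable numeric type, e.g. the sint type from the example schema.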
The spell checking dictionary should be built on startup when spellchecking
is enabled in the system.
First we defined the component in solrconfig.xml. Notice how it has
buildOnCommit to tell it to rebuild the dictionary.
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="classname">solr.IndexBasedSpellChecker</str>
    <str name="field">field</str>
    <str name="spellcheckIndexDir">./s
Hi Darren,
Yes, it is possible! :)
First you need to make sure your Solr has multiple indices using one of the
following options:
http://wiki.apache.org/solr/MultipleIndexes
The most popular approach is the MultiCore approach. If you go that route,
then you query things like in this example:
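Roughly, with the stock multicore setup (cores named core0 and core1), each core is addressed through its own URL path, for example:

  http://localhost:8983/solr/core0/select?q=ipod
  http://localhost:8983/solr/core1/select?q=ipod

Indexing works the same way, posting to /solr/core0/update or /solr/core1/update. Host, port and core names are placeholders for whatever your setup uses.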
Hello,
This is how you build the SC index:
http://wiki.apache.org/solr/SpellCheckComponent#head-78f5afcf43df544832809abc68dd36b98152670c
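Roughly, assuming the SpellCheckComponent is wired into your request handler and the spellchecker is named "default", the build is triggered with a query parameter, for example:

  http://localhost:8983/solr/select?q=*:*&spellcheck=true&spellcheck.build=true

After that, adding spellcheck.q=<term> to a normal query returns suggestions. Host, port and handler here are placeholders.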
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Yao Ge
> To: solr-user@lucene.apache.org
> Sent: Tues
Yes, I did. I was not able to grasp the concept of making spell checking
work.
For example, the wiki page says a spell check index needs to be built, but it
did not say how to do it. Does Solr build the index out of thin air? Or is the
index built from the main index? Or is the index built from a dictionar
Hi,
Pardon if this question has an answer I missed in the archives. I couldn't
find it there or in the docs (again, I may have missed it). But what
I want to do is submit docs to Solr as usual, but also tell Solr which
index to store the doc in, and then be able to query while also providing which
index to keep
Did you by any chance change your schema? Rename a field? Change your
analyzers? etc., between the time you originally
generated your index and the time you blew it away?
I'm wondering if blowing away your index and regenerating just
caused some changes in how you index/search to get picked
up...
Best
Erick
I have to search over multiple fields so passing everything in the 'q' might
not be neat. Can something be done with the facet.query to accomplish this?
I'm using the facet parameters. I'm not familiar with Java so not sure if a
function query could be used to accomplish this. Any other thoughts?
Hello,
I am wondering why Solr is returning a manufacturer name field (Dell, Inc) as
two facet values, Dell as one result and Inc as another. Is there a way to
facet on a field which has spaces or delimiters in it?
query.addFacetField("manu");
query.setFacetMinCount(1);
query.setIncludeScore(true
Hmmm... It looks a bit like magic. After 3 days of experimenting with various
parameters and getting only wrong results, I deleted all the indexed data
and left the minimum set of parameters: qs=default (I omitted it),
StopWords=off (the StopWordsFilter was commented out), no copyFields,
requestHandler=sta
Sorry for the additional message, the disclaimer was missing.
Disclaimer: The code that was used was taken from the following site:
http://e-mats.org/2008/04/using-solrj-a-short-guide-to-getting-started-with-solrj/
.
ahammad wrote:
>
> Hello,
>
> I played around some more with it and I found
Hello,
I played around some more with it and I found out that I was pointing my
constructor to an older class that doesn't have the MultiCore capability.
This is what I did to set up the shards:
query.setParam("shards",
"localhost:8080/solr/core0/,localhost:8080/solr/core1/");
I do have a new
With the DeDuplication patch I create a signature field to control duplicates, which
is an MD5 hash of 3 different fields:
hashField = hash(fieldA + fieldB + fieldC)
With MoreLikeThis I want to show fieldA.
There are documents that DeDuplication will not consider duplicates because
fieldC was different for each
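For context, this is roughly how that signature field gets wired up in solrconfig.xml (a sketch only; the deduplication feature was still quite new at this point, so exact parameter names may differ in your version, and only hashField/fieldA/fieldB/fieldC come from the description above):

  <updateRequestProcessorChain name="dedupe">
    <processor class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
      <bool name="enabled">true</bool>
      <!-- the computed hash is stored in this field -->
      <str name="signatureField">hashField</str>
      <bool name="overwriteDupes">true</bool>
      <!-- the fields that feed the MD5 signature -->
      <str name="fields">fieldA,fieldB,fieldC</str>
      <str name="signatureClass">org.apache.solr.update.processor.MD5Signature</str>
    </processor>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>

The chain then has to be attached to the update handler so it runs on every add.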
But why does MLT return duplicates in the first place? That seems strange to
me. If there are no duplicates in your index, how does MLT manage to return
dupes?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: Marc Sturlese
> To: solr-u
Hi,
I am new to the Lucene forum and this is my first question. I need a clarification
from you.
Requirement: 1. Build an IT search tool for logs, similar to
Splunk (only with respect to searching logs, not reporting, graphs,
etc.), using Solr/Lucene. The log files are mainly the
I assume you are using the StandardRequestHandler, so this should work:
http://192.168.105.54:8983/solr/itas?q=size:7* AND extension:pdf
Also have a look at the following links:
http://wiki.apache.org/solr/SolrQuerySyntax
http://lucene.apache.org/java/2_4_1/queryparsersyntax.html
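(One small note: if you send that URL with curl or similar, the space and AND need to be URL-escaped, e.g.

  http://192.168.105.54:8983/solr/itas?q=size:7*+AND+extension:pdf

otherwise everything after the space may get dropped.)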
Thomas
Jörg A
I'm still not sure what you meant. I took a look at that class but I haven't
got any idea of how to proceed.
BTW, I tried something like this:
query.setParam("shard", "http://localhost:8080/solr/core0/",
"http://localhost:8080/solr/core1/");
But it doesn't seem to work for me. I tried it with
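For comparison, the variant reported as working elsewhere in this thread uses the plural "shards" parameter with a single comma-separated value and no http:// prefix:

  query.setParam("shards", "localhost:8080/solr/core0/,localhost:8080/solr/core1/");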
Actually, "my phrase here"~0 (for an exact match) didn't work I tried, just
for to experiment, to put "qs=100".
Otis Gospodnetic wrote:
>
>
> And "your phrase here"~100 works?
>
> Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
>
>
>
> - Original Message
>>
Thank You!
Francis
-Original Message-
From: Koji Sekiguchi [mailto:k...@r.email.ne.jp]
Sent: Monday, June 01, 2009 5:14 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr.war
They are identical. solr.war is a copy of apache-solr-1.3.0.war.
You may want to look at example target in bui
You should be able to set any name=value URL parameter pair and send it to Solr
using SolrJ. What's the name of that class... MapSolrParams, I believe.
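Something like this, as an untested sketch (the URL and parameter values are just placeholders):

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.solr.client.solrj.SolrServer;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;
  import org.apache.solr.common.params.MapSolrParams;

  public class RawParamsExample {
    public static void main(String[] args) throws Exception {
      // Any name=value pair you could put on the URL can go into this map.
      Map<String, String> params = new HashMap<String, String>();
      params.put("q", "ipod");
      params.put("facet", "true");
      params.put("facet.field", "manu");

      SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
      QueryResponse rsp = server.query(new MapSolrParams(params));
      System.out.println(rsp.getResults().getNumFound() + " docs found");
    }
  }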
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: ahammad
> To: solr-user@lucene.apach
Glad to hear that it's not a problem with my setup.
Thanks for taking care of it! :)
Shalin Shekhar Mangar wrote:
>
> On Tue, Jun 2, 2009 at 8:06 PM, Steffen B.
> wrote:
>
>>
>> I'm trying to debug my DI config on my Solr server and it constantly
>> fails
>> with a NullPointerException:
>> Jun
Hi users,
I have a problem.
I am searching with:
http://192.168.105.54:8983/solr/itas?q=size:7*&extension:db
I mean I want to search for all documents that have size 7* and extension:pdf,
but it doesn't work.
I get some other files, with extension doc or db.
What is happening here?
Jörg
Have you gone through: http://wiki.apache.org/solr/SpellCheckComponent
On Jun 2, 2009, at 8:50 AM, Yao Ge wrote:
Can someone help by providing a tutorial-like introduction on how to get
spell-checking working in Solr. It appears many steps are required
before the
spell-checking functions can be
On Tue, Jun 2, 2009 at 8:06 PM, Steffen B. wrote:
>
> I'm trying to debug my DI config on my Solr server and it constantly fails
> with a NullPointerException:
> Jun 2, 2009 4:20:46 PM org.apache.solr.handler.dataimport.DataImporter
> doFullImport
> SEVERE: Full Import failed
> java.lang.NullPoint
Can someone help by providing a tutorial-like introduction on how to get
spell-checking working in Solr? It appears many steps are required before the
spell-checking functions can be used. It also appears that a dictionary (a
list of correctly spelled words) is required to set up the spell checker. Can
And "your phrase here"~100 works?
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: SergeyG
> To: solr-user@lucene.apache.org
> Sent: Tuesday, June 2, 2009 11:17:23 AM
> Subject: Re: Phrase query search returns no result
>
>
> Thanks, Oti
Thanks, Otis.
Checking for the stop words was the first thing I did after getting the
empty result. Not all of those words are in the stopwords.txt file. Then,
just for experimentation purposes, I commented out the StopWordsAnalyser during
indexing and reindexed. But the phrase was still not found.
Hello,
I have a MultiCore install of Solr with 2 cores that have different schemas and
such. Querying directly using HTTP requests and/or the Solr interface works
very well for my purposes.
I want to have a proper search interface though, so I have some code that
basically acts as a link between the s
Your stopwords were removed during indexing, so if all those terms were
stopwords, and they likely were, none of them exist in the index now. You can
double-check that with Luke. You need to remove stopwords from the index-time
analyzer, too, and then reindex.
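Concretely, that means making the index-time and query-time chains agree. A sketch of a schema.xml "text" field type with the stop filter taken out of both sides (adjust to your real analyzer chain):

  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- solr.StopFilterFactory removed here as well, not just at query time -->
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

Documents indexed with the old chain already had the stopwords stripped out, hence the reindex.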
Otis
--
Sematext -- http://sem
Hi,
I'm trying to debug my DI config on my Solr server and it constantly fails
with a NullPointerException:
Jun 2, 2009 4:20:46 PM org.apache.solr.handler.dataimport.DataImportHandler
processConfiguration
INFO: Processing configuration from solrconfig.xml: {config=dataconfig.xml}
Jun 2, 2009 4:20:
Hi,
I'm trying to implement a full-text search but can't get the right result
with a phrase query search. The field I search through was indexed as a
"text" field. The phrase was "It was as long as a tree". During both
indexing and searching the StopWordsFilter was on. For the search I used these
se
Hi... I have a problem with the facets in Solritas.
I search for "Ipod" or "plesnik" and the facet counts say:
(PDF) 39
(TXT) 109
(DOC) 1200
When I click on PDF, I want to see the 39 PDFs with the keyword "plesnik",
but I get more than 800, which is all the PDFs in the index.
Is this a bug or a feature?
You can find the answer in the tutorial or in the example setup.
On Tuesday, June 2, 2009, The Spider wrote:
>
> Hi,
> I am using a Solr nightly build for my search.
> I have to search in the location field of the table which is not my default
> search field.
> I will briefly explain my requirement below:
> I want to ge
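(In case it helps while going through the tutorial: with the standard request handler you can query a non-default field just by prefixing the field name, e.g. using the location field from the question, with host, port and values as placeholders:

  http://localhost:8983/solr/select?q=location:boston
  http://localhost:8983/solr/select?q=location:boston+AND+restaurant

where the unprefixed term is searched against the default field.)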
You mean how to configure Solr to support Chinese?
Or is it an update problem?
On Tuesday, June 2, 2009, Fer-Bj wrote:
>
> I'm sending 3 files:
> - schema.xml
> - solrconfig.xml
> - error.txt (with the error description)
>
> I can confirm by now that this error is due to invalid characters for the
> XML form