Hello - I am attempting to add the spellcheck component to my "search"
request handler so when a user does a search, they get the results and spelling
corrections all in one query, just like the way the facets work.
I am having some trouble accomplishing this - can anyone point me to
documentation on setting this up?
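To illustrate what I am after, here is a sketch of the kind of single request I am hoping for, assuming a 1.3-style SpellCheckComponent has been registered in solrconfig.xml and added to the "search" handler's last-components (wiring not shown); the query and facet field values are just examples:
# The very first request may need spellcheck.build=true to build the spelling index.
curl 'http://localhost:8080/solr/select?qt=search&q=automobiel&facet=true&facet.field=subject&spellcheck=true&spellcheck.count=5&spellcheck.collate=true'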
From: Geoffrey Young [EMAIL PROTECTED]
Sent: Friday, July 25, 2008 2:13 PM
To: solr-user@lucene.apache.org
Subject: Re: Multiple search components in one handler - ie spellchecker
Andrew Nagy wrote:
> Hello - I am attempting to
From: Geoffrey Young [EMAIL PROTECTED]
Sent: Friday, July 25, 2008 3:04 PM
To: solr-user@lucene.apache.org
Subject: Re: Multiple search components in one handler - ie spellchecker
Andrew Nagy wrote:
> Thanks for getting back to me Geoff. Although, that
> Subject: Re: Multiple search components in one handler - ie
> spellchecker
>
> On Sat, Jul 26, 2008 at 12:37 AM, Andrew Nagy
> <[EMAIL PROTECTED]>
> wrote:
>
> > Exactly - however the spellcheck component is not working for my setup.
> > The spelling s
> -Original Message-
> From: Shalin Shekhar Mangar [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 28, 2008 10:09 AM
> To: solr-user@lucene.apache.org
> Subject: Re: SpellCheckComponent problems (was: Multiple search
> components in one handler - ie spellchecker)
>
> Can you show us the query url again? I'm sorry but it doesn't seem
> like
> a problem with the spell checker itself. Also check if there are any
> exceptions in the Solr log/console.
>
> On Mon, Jul 28, 2008 at 8:32 PM, Andrew Nagy
> <[EMAIL PROTECTED]>wrote:
>
> > > -Original
ts in one handler - ie spellchecker)
>
> No, SpellCheckComponent was in the nightly long before July 25. There must
> be a stack trace after that error message. Can you post that?
>
> On Mon, Jul 28, 2008 at 9:26 PM, Andrew Nagy
> <[EMAIL PROTECTED]>wrote:
>
> > I was just
> -Original Message-
> From: Shalin Shekhar Mangar [mailto:[EMAIL PROTECTED]
> Sent: Monday, July 28, 2008 12:38 PM
> To: solr-user@lucene.apache.org
> Subject: Re: SpellCheckComponent problems (was: Multiple search
> components in one handler - ie spellchecker)
>
> Well that means the nigh
Hello - I am part of a larger group working on an import tool called
SolrMarc. I am running into an error, am not sure what is causing it, and am
looking for any leads.
I am getting the following exception from the SolrCore constructor:
Exception in thread "main" java.lang.NoClassDefFoundError
I read on the Solr 1.3 wiki page that there is a code freeze as of today - is
this still accurate? Moreover, does this mean that Solr 1.3 will most likely
ship with Lucene 2.4-dev, or is there a plan to wait for Lucene 2.4 to be
released?
I know scheduling questions are annoying, but I am curious.
Chris - thanks for the alert. Can you please clarify the usage of the default
attribute that is documented to be used in the "core" node? SOLR-545 has a
note about this being removed, and it is not shown in the new example solr.xml
file.
Thanks
Andrew
> -Original Message-
> From: Chri
Okay - I found the removal of the default attribute in
https://svn.apache.org/viewvc/lucene/solr/trunk/src/java/org/apache/solr/core/MultiCore.java?
r1=606335&r2=602003
I will update the documentation on the multicore changes.
Andrew
> -Original Message-
> From: Chris Hostetter [mailto
Thanks Grant for the update. We have found that the lucene-2.4-dev libs are a
bit outdated. My colleague is going to open a bug about this. Has any thought
been given as to which snapshot of the lucene-2.4-dev libs will be used
for Solr 1.3? I also like the idea of renaming them to lucene-2.4-
I am trying to set up a multicore implementation. I just upgraded to
today's snapshot, converted my multicore.xml to solr.xml, and also
changed the xml to match the new schema. However, now that I have done that,
Solr is not finding my data directory. With the use of multicore
Nevermind - sorry. The data directory in my solrconfig.xml was not changed to
the correct path. Now it's alive!
Andrew
> -Original Message-
> From: Andrew Nagy [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, August 13, 2008 12:13 PM
> To: solr-user@lucene.apache.org
>
Doug - I had this same problem today. If you look at my post from earlier
today you will see the problem. You will need to adjust the solr.data.dir
value in the solrconfig.xml.
Maybe this also needs to be changed in the example solrconfig.xml document?
Andrew
> -Original Message-
> F
Thanks for clarifying that Ryan - I was a bit confused too...
> Before 1.3 is released, you will either be able to:
> 1. set the dataDir from your solr.xml config
>
>
I have been perusing the multicore code and found that the "default" attribute
was removed. It also appears that the "dataDir
Hello - I stumbled across an odd error which my intuition is telling me is a
bug.
Here is my installation:
Solr Specification Version: 1.2.2008.08.13.13.05.16
Lucene Implementation Version: 2.4-dev 685576 - 2008-08-13 10:55:25
I did the following query today:
author:(r*a* AND fisher)
And get th
Hello - I have the following field:
However, when I do a search, the url field does not display. Does the field
also need to be indexed in order to retrieve the data?
Thanks
Andrew
-indexed but stored field
On Oct 14, 2008, at 12:16 PM, Andrew Nagy wrote:
> Hello - I have the following field:
>
> multiValued="true"/>
>
> However, when I do a search, the url field does not display. Does
> the field also need to be indexed in order to retrieve th
, Andrew Nagy <[EMAIL PROTECTED]> wrote:
> Sorry for the late follow-up. I am doing this, but get nothing back.
Did you change the field to "stored" in the schema after you added the document?
I've never seen anyone having this problem, so perhaps verify that you
are actual
Hello, I was wondering if there is a way to facet on certain characters of a
field. For example, I would like to get a facet count on how many of my titles
start with the letter A, B, C, etc.
Is this possible with SOLR?
Thanks
Andrew
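For reference, one way to get per-letter counts without changing the schema is a facet.query per letter against an untokenized (string-type) field; a sketch where "title_sort" is an assumed field name and the prefix match is case-sensitive:
curl 'http://localhost:8080/solr/select?q=*:*&rows=0&facet=true&facet.query=title_sort:a*&facet.query=title_sort:b*&facet.query=title_sort:c*'
Another option is to index just the first character of the title into its own string field (a copyField plus a pattern-based analyzer) and facet on that field directly.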
Hello, I was thinking that Solr - with its built-in faceting - would make for a
great Apache log file storage system. I was wondering if anyone knows of any
module or library for Apache to write log files directly to Solr or to a Lucene
index?
Thanks
Andrew
Here are a few SOLR performance questions:
1. I have noticed with 500,000+ records that my facet queries run quite fast
when there is a large number of matches, but on a small result set
(say 10 - 50) they become very slow. Any suggestions as to how to
improve this?
, 2007 5:15 PM
To: solr-user@lucene.apache.org
Subject: Re:
Andrew Nagy wrote:
> Hello - I am trying out the CSV importer and am curious with an error that I
> am consistently running into. What am I doing incorrectly here? I am
> importing a pipe delimited CSV file with quotes enca
Hello - I am trying out the CSV importer and am curious about an error that I am
consistently running into. What am I doing incorrectly here? I am importing a
pipe-delimited CSV file with quote encapsulation.
Thanks
Andrew
curl
http://localhost:8080/solr/update/csv?header=true%26seperator=%7
> On Dec 2, 2007, at 5:43 PM, Ryan McKinley wrote:
>>
>>
>> try \& rather than %26
>
>
> or just put quotes around the whole url. I think curl does the right thing
> here.
I tried all the methods: converting & to %26, converting & to \&, and
encapsulating the url with quotes. All give the same error.
Ugh ... I shouldn't be coding on a Sunday night - especially after the Eagles
lost again!
I spelled separator correctly this time :) - but still no luck.
curl
'http://localhost:8080/solr/update/csv?header=true&separator=%7C&encapsulator=%22&commit=true&stream.file=import/homes.csv'
-H 'Content
Ryan, I didn't know there was a debugger - this could come in handy for other
things. Thanks!
I tried it out and it looks like everything is being parsed correctly when
passing the url in quotes:
curl
"http://localhost:8080/solr/debug/dump?header=true&separator=%7C&encapsulator=%22&commit=tru
ot;
Looks like if you don't specify header=true, it defaults to true - but
if you do, it throws an error.
I think there may be a bug... Yonik, should line 243 be:
} else if (!hasHeader) {
^!!!
ryan
Andrew Nagy wrote:
> Ryan, i didn't know there was a debugger - this
I am playing around with a new feature in my system that uses Solr and I am
testing a query that ORs the same field together over 150
times. I know this sounds pretty ridiculous and, as I said, I am just playing
around. However, Solr just returns a blank page and doesn't process
Hello - I stumbled upon an odd bug, or what appears to be a bug, today. I have
been using my own custom version numbers for my schema and tried to change the
version number from 0.8 to 0.8.1, which rendered Solr useless, yielding a schema
parsing error. I then tried to change it to 0.8-1 with the
Hello - I was wondering if there is a workaround for POSTing repeated fields
to Solr. I am using Jetty as my container with Solr 1.2.
I tried something like:
http://localhost:8080/solr/select/?q=author:(smith)&rows=0&start=0&facet=true&facet.mincount=1&facet.limit=10&facet.field=authorlast&fac
> On 4-Jun-08, at 2:22 PM, Andrew Nagy wrote:
>
> > Hello - I was wondering if there is a work around with POSTing
> > repeated fields to Solr. I am using Jetty as my container with Solr
> > 1.2.
> >
> > I tried something like:
> >
> http://localh
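For reference, the same repeated parameters can be sent as a POST body, which sidesteps URL-length limits; curl joins multiple --data arguments with & and Solr reads form-encoded POSTs the same way as query strings. The field names below follow the query above:
# POST the parameters instead of building one long GET URL.
curl 'http://localhost:8080/solr/select' \
  --data 'q=author:(smith)' --data 'rows=0' --data 'start=0' \
  --data 'facet=true' --data 'facet.mincount=1' --data 'facet.limit=10' \
  --data 'facet.field=authorlast' --data 'facet.field=subject'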
Hello, I am new to SOLR but very excited about its possibilities.
I am having some difficulties with my data import which I hope can be
solved very easily.
First I wrote an xslt to transform my xml into the solr schema and
modified the schema.xml to match the fields that I created. I then ran
What is necessary for changes to the schema.xml to take
effect for all of my records? I restarted Tomcat, but it does not seem
that my changes have taken effect.
I wanted to change a full-text field from type:string to type:text to
allow for better searching, but do not see any difference.
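For reference: a schema change like string -> text only affects documents indexed after the change, so the existing records have to be deleted and re-posted. A rough sketch using the standard XML update syntax (the re-post step depends on how the data was originally loaded):
# Delete everything, commit, then re-run the original import and commit again.
curl 'http://localhost:8080/solr/update' -H 'Content-type:text/xml; charset=utf-8' --data-binary '<delete><query>*:*</query></delete>'
curl 'http://localhost:8080/solr/update' -H 'Content-type:text/xml; charset=utf-8' --data-binary '<commit/>'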
In September there was a thread [1] on this list about heterogeneous
facets and their performance. I am having a similar issue and am
unclear as to the resolution of that thread.
I performed a search against my dataset (492,000 records) and got the
results I am looking for in .3 seconds. I then
Yonik Seeley wrote:
1) facet on single-valued strings if you can
2) if you can't do (1) then enlarge the fieldcache so that the number
of filters (one per possible term in the field you are filtering on)
can fit.
I will try this out.
3) facet counts are limited to the results of the query, fi
Yonik Seeley wrote:
1) facet on single-valued strings if you can
2) if you can't do (1) then enlarge the fieldcache so that the number
of filters (one per possible term in the field you are filtering on)
can fit.
I changed the filterCache to the following:
However a search that normally t
Yonik Seeley wrote:
On 12/8/06, Andrew Nagy <[EMAIL PROTECTED]> wrote:
I changed the filterCache to the following:
However a search that normally takes .04s is taking 74 seconds once I
use the facets since I am faceting on 4 fields.
The first time or subsequent times?
I
Chris Hostetter wrote:
: Could you suggest a better configuration based on this?
If that's what your stats look like after a single request, then I would
guess you would need to make your cache size at least 1.6 million in order
for it to be of any use in improving your facet speed.
Would th
Yonik Seeley wrote:
Are they multivalued, and do they need to be?
Anything that is of type "string" and not multivalued will use the
lucene FieldCache rather than the filterCache.
The author field is multivalued. Will this be a strong performance issue?
I could make multiple author fields as
J.J. Larrea wrote:
Unfortunately which strategy will be chosen is currently undocumented and
control is a bit oblique: If the field is tokenized or multivalued or Boolean,
the FilterQuery method will be used; otherwise the FieldCache method. I expect
I or others will improve that shortly.
Erik Hatcher wrote:
On Dec 8, 2006, at 2:15 PM, Andrew Nagy wrote:
My data is 492,000 records of book data. I am faceting on 4 fields:
author, subject, language, format.
Format and language are fairly simple as there are only a few unique
terms. Author and subject however are much
Hello, me again.
I have been running some extensive tests of my search engine and have
been seeing inaccuracies with the "numFound" attribute. It tends to
return 1 more than what is actually shown in the XML.
Is this a bug, or could I be doing something wrong?
I have a specific example in fr
- Original Message -
From: Yonik Seeley <[EMAIL PROTECTED]>
Date: Friday, December 8, 2006 6:01 pm
Subject: Re: Result: numFound inaccuracies
To: solr-user@lucene.apache.org
>
> start is 0 based :-)
>
Man do I feel dumb!
Andrew
I installed the 12-8 snapshot of solr on my 64bit RH AS server and
whenever I go to the admin page I get the following error:
SEVERE: Servlet.service() for servlet jsp threw exception
java.lang.NoClassDefFoundError: Could not initialize class
org.apache.solr.core.SolrCore
Any ideas as to what
. So the CWD should be correct. I
have this same setup on another server that I have been working on with
no problem. I'm kinda lost with this one.
Is there a setting in the solrconfig.xml file that I should be looking at?
Andrew
Yonik Seeley wrote:
On 12/11/06, Andrew Nagy <[EMAIL PROTEC
Nevermind, I got it working now. Had the paths set up incorrectly.
Dumb++
Andrew
Andrew Nagy wrote:
Thanks Yonik for the reply. I am using tomcat, and there is nothing in
the catalina.out file. The access log just reports the same error I
see in the browser which is reported below.
I am
I was wondering how I might create multiple collections that have
different field sets under Solr. Would I have to have multiple
instances of Solr running, or can I have more than one schema.xml
file per "collection"?
Thanks
Andrew
I have 2 questions about the SOLR relevancy system.
1. When I search for the exact phrase of a title of a record I have,
why does it generally not come up as the 1st record in the results?
ex: title:(gone with the wind) - the record comes up 3rd. A record with
the term "wind" as the first
Yonik Seeley wrote:
Things you can try:
- post the debugging output (including score explain) for the query
I have attached the output.
- try disabling length normalization for the title field, then remove
the entire index and re-index.
- try the dismax handler, which can generate sloppy phrase
Yonik Seeley wrote:
On 1/23/07, Andrew Nagy <[EMAIL PROTECTED]> wrote:
Yonik Seeley wrote:
> Things you can try:
> - post the debugging output (including score explain) for the query
I have attached the output.
> - try disabling length normalization for the title field, then remo
Yonik Seeley wrote:
Ok, here is your query:
title:(gone with the wind) OR title2:(gone
with the wind)
And here it is parsed:
(title:gone title:wind) (title2:gone
title2:wind)
First, notice how stopwords were removed, so "with" and "the" will not
count in the results.
You are querying across t
Yonik Seeley wrote:
What about term ranking, could I rank the phrases searched in title
higher than title2?
Absolutely... standard lucene syntax for boosting will give you that
in the standard query handler.
title:(gone with the wind)^3.0 OR title2:(gone with the wind)
That did it! Thanks f
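For reference, the boosted query can be sent from the command line with debugQuery=on to see the score explanations; the ^ needs to be URL-encoded as %5E:
curl 'http://localhost:8080/solr/select?q=title:(gone+with+the+wind)%5E3.0+OR+title2:(gone+with+the+wind)&debugQuery=on'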
Bertrand Delacretaz wrote:
On 1/31/07, Brian Whitman <[EMAIL PROTECTED]> wrote:
Does Solr have support for the Lucene query-contrib "MoreLikeThis"
query type or anything like it? ...
Yes, there's a patch in http://issues.apache.org/jira/browse/SOLR-69 -
if you try it, please add your comments
Gunther, Andrew wrote:
Yes most all terms are multi-valued which I can't avoid.
Since the data is coming from a library catalogue I am translating a
subject field to make a subject facet. That facet alone is the biggest,
hovering near 39k. If I remove this facet.field things return faster.
So a
Hmm ... I had a brainstorm.
Could I do something like this:
Dir1/Subdir1/SubSubDir1
Then query collection:"Dir1/Subdir1" and get the facets on collection at
that point to see all of the subsubdirectories?
Is there any better method?
Andrew
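For reference, a sketch of the path-prefix idea above, assuming a string-type "collection" field that stores the full path and a Solr build that has the facet.prefix parameter:
# Show only the facet terms (sub-paths) underneath Dir1/Subdir1.
curl 'http://localhost:8080/solr/select?q=*:*&rows=0&facet=true&facet.field=collection&facet.prefix=Dir1/Subdir1/'
Note that facet.prefix only restricts which facet terms are returned; the counts are still computed over whatever the main query matches.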
Andrew Nagy wrote:
I am running into a
I am running into a stumbling block and can only find a way to solve the
problem with some sort of hierarchical faceting system. I am in the
process of moving my records from eXist (http://exist.sf.net) to Solr,
but the problem is with the lack of a "directory structure" that eXist
has. I fig
Hello, I am trying to install another copy of solr on a server. I have
done this many times before, but am running into errors now and I am not
sure what is causing them.
I unzipped a copy of 1.1.0 and placed the .war file into tomcat. Then I
created the solr directory with my bin, data, con
into the code guts and try to contribute.
Andrew
Yonik Seeley wrote:
On 3/7/07, Andrew Nagy <[EMAIL PROTECTED]> wrote:
Hello, I am trying to install another copy of solr on a server. I have
done this many times before, but am running into errors now and I am not
sure what is causing th
Is there a way to not return any docs and only facets? I tried setting
the fl parameter to blank, but then I get everything back.
Thanks
Andrew
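For reference, one approach that works is to leave fl alone and ask for zero rows; the facet counts still come back (facet.field below is just an example):
curl 'http://localhost:8080/solr/select?q=*:*&rows=0&facet=true&facet.field=subject'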
Is there a science to choosing cache sizes? I have about 500,000
records and am seeing a lot of evictions - about 50% of lookups. What
factors can I look at to determine what my cache sizes should be?
Here are my cache statistics:
filterCache
class: org.apache.solr.search.LRUCache
ver
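For reference, the numbers above come from the admin stats page; a rough way to pull just the filterCache section (stats.jsp is where 1.1/1.2-era installs expose it):
curl -s 'http://localhost:8080/solr/admin/stats.jsp' | grep -A 20 filterCache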
Hello
I have 2 fields that I am faceting on, both of which are of type
"string." The first field is a copyfield from a "text" field copied to
a "string" field for faceting. The other is purely a "string" field.
The faceted results of the copyfield are accurate; however the facet
results of
Hello, I downloaded the latest nightly snapshot of Solr and replaced my
existing war with the new one. Once I restarted tomcat, I get this error:
SEVERE: Error filterStart
Apr 5, 2007 10:11:28 AM org.apache.catalina.core.StandardContext start
SEVERE: Context [/solr] startup failed due to previo
Does anyone have a good method of debugging a schema?
I have been struggling to get my new schema to run for the past couple
of days and just do not see anything wrong with the XML document.
Thanks
Andrew
Ryan McKinley wrote:
What errors are you getting? Are there exceptions in the log when it
starts up?
Just a null pointer exception.
I added a few fields to my schema, and then replaced my solr war file
with the latest build (see my message from a week ago). It wouldn't
work, so I assumed so
Andrew Nagy wrote:
Ryan McKinley wrote:
What errors are you getting? Are there exceptions in the log when it
starts up?
Just a null pointer exception.
I added a few fields to my schema, and then replaced my solr war file
with the latest build (see my message from a week ago). It wouldn
Ryan McKinley wrote:
Are you using the example solrconfig.xml? The stack trace looks like
an error finding the solr.home index directory, that is configured in
solrconfig.xml, not schema.xml
Yeah, I noticed that too ... but I don't understand why it can't find
the home. I have the data home se
Greg Ludington wrote:
I just installed SOLR-75 patch and the "schema browser" is able to view
the schema perfectly. When I used the default schema with SOLR,
everything is fine, but when I replace my schema ... it throws this
NullPointerException
One thing to note is that the "schema br
Andrew Nagy wrote:
Ryan McKinley wrote:
If the example schema.xml works, can you try adding a little bit of
your schema at a time?
Yeah, that is my last resort. I guess I have no choice!
So I did this, and I ended up removing all of my fields and copyfields
with still no luck. I took a
Yonik Seeley wrote:
On 4/11/07, Andrew Nagy <[EMAIL PROTECTED]> wrote:
> If the example schema.xml works, can you try adding a little bit of
> your schema at a time?
Yeah, that is my last resort. I guess I have no choice!
That certainly is strange... Sounds like you definitely h
Ryan McKinley wrote:
Off topic a bit, has anyone set forth to build a new admin interface for
SOLR? I build a lot of admin interfaces for my day job and would love
to give the admin module a bit of a tune-up (I won't use the term
overhaul).
I think we definitely need an updated admin inter
So maybe I am doing something really dumb, but for testing purposes I
took a copy of the example schema.xml file and removed all of the fields
except for 2 to start with something really basic and it is throwing the
NullPointerException again.
I attached the file if it is of any help
I am usi
Yonik Seeley wrote:
I dropped your schema.xml directly into the Solr example (using
Jetty), fired it up, and everything works fine!?
Okay, I switched over to Jetty and now I get a different error:
SEVERE: org.apache.solr.core.SolrException: undefined field text
Are you sure you are using the s
Ryan McKinley wrote:
With a clean checkout, you can run:
$ ant example
$ cd example
$ java -jar start.jar
and things work OK.
But, when you delete all but the two fields, you get an exception
somewhere?
Well, I was working from my own directory, not the example directory. I
can give that a t
Yonik Seeley wrote:
Oh wait... Andrew, were you always testing via "ping"?
Check out what the ping query is configured as in solrconfig.xml:
qt=dismax&q=solr&start=3&fq=id:[* TO *]&fq=cat:[*
TO *]
Perhaps we should change it to something simple by default??? "q=solr"?
That solv
Hello, I would like to play with patch SOLR-69 and am trying to rebuild
solr using ant with some difficulties. When I try to run ant, I get an
error saying that it can't find junit, but junit is in the ant lib
directory. I had a file called ant-junit.jar in the lib directory and
copied it to
Erik Hatcher wrote:
ant-junit.jar != junit.jar
rename it back, and grab junit.jar from junit.org
Easy enough.
Thanks!
Andrew
I downloaded and patched my solr source with the latest solr69 patch and
whenever I run ant I get an error:
[javac]
/office/src/apache-solr-nightly/src/java/org/apache/solr/handler/MoreLikeThisHandler.java:145:
cannot find symbol
[javac] symbol : variable DEFAULT_MIN_DOC_FREQ
[javac] locatio
error in the Lucene version.
On May 16, 2007, at 12:13 PM, Andrew Nagy wrote:
> I downloaded and patched my solr source with the latest solr69
> patch and whenever I run ant I get an error:
>
> [javac] /office/src/apache-solr-nightly/src/java/org/apache/solr/
> handler/MoreLikeThisH
EMAIL PROTECTED]
Sent: Thursday, May 24, 2007 10:36 AM
To: solr-user@lucene.apache.org
Subject: Re: compile error with SOLR 69 MoreLikeThis patch
On May 24, 2007, at 10:29 AM, Andrew Nagy wrote:
> That did the trick. However, now I am trying to apply the patch to
> a fresh copy of solr on ano
Chris, thanks for the tip. I think I am okay with pushing the trunk to my
production server. As we say around here, if you want to be on the bleeding
edge, you have to be okay with bleeding every once in a while :)
Thanks again
Andrew
From: Chris Hostet
While I am on this topic, I think it might be nice to have a nightly build for
downloading - or is there something like that in place now?
From: Chris Hostetter [EMAIL PROTECTED]
Sent: Thursday, May 24, 2007 2:18 PM
To: solr-user@lucene.apache.org
Subject: R
available here: <http://people.apache.org/builds/lucene/solr/nightly/>
(a link exists on the wiki main page, for future reference).
Erik
On May 24, 2007, at 2:28 PM, Andrew Nagy wrote:
> While I am on this topic, I think it might be nice to have a
> nightly build for downloading
Hello, I have been playing off and on with the more like this patch and I
really want to get it working well. I have the patch installed and I have
about 500K bibliographic records in my solr index.
My MLT query uses a field list of about 5 or 6 fields. There is a mix of
string and text fiel
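For reference, a sketch of the kind of request involved, assuming the SOLR-69 MoreLikeThisHandler is mapped at /mlt in solrconfig.xml; the field list and document id below are just examples:
# Find documents similar to the one matching id:12345, comparing on three fields.
curl 'http://localhost:8080/solr/mlt?q=id:12345&mlt.fl=title,subject,author&mlt.mintf=1&mlt.mindf=2&rows=5'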
Hello, is there any documentation on how to use the new spell check module?
Thanks
Andrew
Hello, I would like to generate a list of facets, let's say on 5 fields. I
have the facet limit set to 5 so that for each of the 5 fields there will only
be up to 5 values.
My question is: Is there a way to change the limit per field? Let's say on
facet 2 I would like to display 10 values ins
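For reference, per-field overrides use the f.<fieldname>.<param> form, so one field can be given a different limit than the global facet.limit (field names below are examples):
# subject gets 10 facet values while the global limit stays at 5.
curl 'http://localhost:8080/solr/select?q=*:*&rows=0&facet=true&facet.limit=5&facet.field=author&facet.field=subject&f.subject.facet.limit=10'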