On Oct 30, 2007, at 5:16 PM, Jae Joo wrote:
I am trying to delete a document remotely through a curl command,
but got an
internal server error - Permission Denied.
Does anyone know how to solve this problem?
Do you see a full stack trace error in the Solr console or log? If
so, what is it?
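For reference, a minimal sketch of a remote delete (endpoint and document id below are examples for a default single-core install; a "Permission Denied" usually points at file permissions on the index directory or servlet container rather than at the command itself):

```shell
# Example endpoint and id -- adjust for your deployment.
SOLR_UPDATE='http://localhost:8983/solr/update'
DELETE_XML='<delete><id>SP2514N</id></delete>'

# Post the delete, then a commit so the change becomes visible.
# ("|| true" only keeps this sketch runnable without a live server.)
curl -s "$SOLR_UPDATE" -H 'Content-type:text/xml; charset=utf-8' \
  --data-binary "$DELETE_XML" -o /dev/null || true
curl -s "$SOLR_UPDATE" -H 'Content-type:text/xml; charset=utf-8' \
  --data-binary '<commit/>' -o /dev/null || true
```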
There is more to consider here. Lucene now supports "payloads",
additional metadata on terms that can be leveraged with custom
queries. I've not yet tinkered with them myself, but my understanding
is that they would be useful (and in fact designed in part) for
representing structured documents
On Oct 31, 2007, at 10:04 AM, Brian Carmalt wrote:
There is more to consider here. Lucene now supports "payloads",
additional metadata on terms that can be leveraged with custom
queries. I've not yet tinkered with them myself, but my
understanding is that they would be useful (and in fact
Usability consideration,
Not really answering your question, but I must comment: searching over sets of
up to 100k items makes faceted navigation very effective, but it becomes much
less effective past 100k. You may want to consider breaking up the 500k
documents into categories (a typical breadcrumb) of 100k to f
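A sketch of what such a category split can look like at query time (field names here are invented; the standard request handler accepts fq filter queries alongside facet parameters):

```shell
CATEGORY='electronics'   # hypothetical breadcrumb category
# Facet within one ~100k-document category instead of all 500k docs.
curl -s -G 'http://localhost:8983/solr/select' \
  --data-urlencode 'q=laptop' \
  --data-urlencode "fq=category:$CATEGORY" \
  --data-urlencode 'facet=true' \
  --data-urlencode 'facet.field=brand' \
  -o /dev/null || true
```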
Hi,
I am creating an index of approx 500K documents. I wrote an indexing
program using embedded solr: http://wiki.apache.org/solr/EmbeddedSolr
and am seeing probably a 10-fold increase in indexing speed. My
problem, though, is that if I try to reindex, say, 20K docs at a time it
slows down c
Hiya,
I would just like to report that the problem with snapinstaller turned out to
be that the default crontab environment PATH did not include the path to where
curl is installed (/usr/local/bin).
So line 111 of the commit script was failing:
rs=`curl ${curl_url} -s -H 'Content-type:text/xml;
cha
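For anyone hitting the same thing, the fix is a one-line PATH entry at the top of the crontab (paths and the script location below are examples, not the poster's actual setup):

```shell
# crontab -e
PATH=/usr/local/bin:/usr/bin:/bin
# every 10 minutes, pull and install the latest snapshot
0,10,20,30,40,50 * * * * /home/solr/solr/bin/snapinstaller
```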
Greetings Brendan,
In the solrconfig.xml file, under the updateHandler, there is an autoCommit
element.
It looks like (the archive stripped the tags; element names below follow the
stock Solr 1.2 solrconfig.xml):

  <autoCommit>
    <maxDocs>1000</maxDocs>
    <maxTime>1000</maxTime>
  </autoCommit>
I would think you would see better performance by allowing auto commit to
handle the commit size instead of reopening the connection all the time.
--
Rick Steinberger
Applications Architect
Alignex, Inc.
7200 Metro Boulevard
Minneapolis, MN 55439
Toll Free (866)378-6829 x347
Direct (952)224-5347
Mobile (612)834-2881
Hi,
Does FunctionQuery actually override the default similarity function? If
it does, how can I still access the similarity value?
Thank you.
-Original Message-
From: Otis Gospodnetic [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 25, 2007 4:42 PM
To: solr-user@lucene.apache.org
Subje
On 10/31/07, Victoria Kaganski <[EMAIL PROTECTED]> wrote:
> Does FunctionQuery actually override the default similarity function? If
> it does, how can I still access the similarity value?
FunctionQuery returns the *value* of a field (or a function of it) as
the value for a query - it does not use
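As a concrete sketch (the popularity field is an example; the standard query parser accepts the _val_ hook to turn a function into a query whose score is the function's value):

```shell
FUNC='_val_:"ord(popularity)"'   # score = ordinal of the popularity field
curl -s -G 'http://localhost:8983/solr/select' \
  --data-urlencode "q=$FUNC" -o /dev/null || true
```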
> From: [EMAIL PROTECTED]
> Subject: Re: Phrase Query Performance Question
> Date: Tue, 30 Oct 2007 11:22:17 -0700
> To: solr-user@lucene.apache.org
>
> On 30-Oct-07, at 6:09 AM, Yonik Seeley wrote:
> > On 10/30/07, Haishan Chen <[EMAIL PROTECTED]> wrote:
> >> Thanks a lot for replying Yon
On 31-Oct-07, at 2:40 PM, Haishan Chen wrote:
http://mail-archives.apache.org/mod_mbox/lucene-java-user/200512.mbox/[EMAIL PROTECTED]
It mentioned that http://websearch.archive.org/katrina/ (in Nutch)
had 10M documents and a search of "hurricane katrina" was able to
return in 1.35 seconds
> From: [EMAIL PROTECTED]
> Subject: Re: Phrase Query Performance Question
> Date: Wed, 31 Oct 2007 15:25:42 -0700
> To: solr-user@lucene.apache.org
>
> On 31-Oct-07, at 2:40 PM, Haishan Chen wrote:
> >
> > http://mail-archives.apache.org/mod_mbox/lucene-java-user/200512.mbox/[EMAIL
"hurricane katrina" is a very expensive query against a collection
focused on Hurricane Katrina. There will be many matches in many
documents. If you want to measure worst-case, this is fine.
I'd try other things, like:
* ninth ward
* Ray Nagin
* Audubon Park
* Canal Street
* French Quarter
* FEM
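A rough way to time a batch of candidate queries like these against a local instance (sketch; the default host and the handler's q parameter are assumed):

```shell
QUERIES='ninth ward
Ray Nagin
Audubon Park
Canal Street
French Quarter'

# Issue each one as a phrase query; wrap the curl in time(1) to measure.
printf '%s\n' "$QUERIES" | while read -r q; do
  curl -s -G 'http://localhost:8983/solr/select' \
    --data-urlencode "q=\"$q\"" -o /dev/null || true
done
```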
: is it possible to have a CopyField with a functionquery as it's source?
: for instance :
: If not, I think this would make a nice addition.
FunctionQueries are inherently search related ... shoehorning them to be
used at index time in a copyField would be very hard.
There has been
: So far this seems acceptable. Query performance seems fine when using
: the dynamic fields to sort result sets; indexing performance also
: seems fine*. That said, there are only 400K documents in the
: collection I'm working with, and few external rating sources at the
: moment (there are about
: If you went with the FunctionQuery approach for sorting by distance, would
: there be any way to use the output of the FunctionQuery to limit the
: documents to those within a certain radius? Or is it just for boosting
: documents, not for filtering?
FunctionQueries don't restrict the set of documents
: Does anyone know of a way to have an index analyzer factory affect the
: contents of the actual data (versus the contents of the index)?
That's not possible in an Analyzer, but there has been some experimental
work in adding a special type of plugin for doing things like this...
http://wiki.
: Is there an easy way to find out which version of solr is running. I installed
: solr 1.2 and set up an instance using Tomcat. It was successful before.
FYI: starting a while back, the "Info" page (registry.jsp) of the admin
interface gives you specifics on the Solr and Lucene versions in use.
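From the command line that amounts to the following (sketch, assuming a default local install; the same page also works from a browser):

```shell
ADMIN_URL='http://localhost:8983/solr/admin/registry.jsp'
# The registry page lists the exact Solr and Lucene versions in use.
# The pipeline prints nothing if no local instance is running.
curl -s "$ADMIN_URL" | grep -i 'version' | head -5
```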
: I would like to use solr facets with multi-word queries, is it possible
: I mainly implement a suggest application and use facet.prefix parameter,
: it works fine with single word but not with multiple words
it depends on your definition of "works" ... do you want each word to be
a separate
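For context, a facet.prefix request looks like this (field name invented; note that facet.prefix matches against raw indexed terms, so a multi-word prefix can only match if the field keeps whole phrases as single terms, e.g. a string field or one analyzed with KeywordTokenizer):

```shell
PREFIX='new y'   # hypothetical multi-word prefix from a suggest box
curl -s -G 'http://localhost:8983/solr/select' \
  --data-urlencode 'q=*:*' \
  --data-urlencode 'rows=0' \
  --data-urlencode 'facet=true' \
  --data-urlencode 'facet.field=suggest_field' \
  --data-urlencode "facet.prefix=$PREFIX" \
  -o /dev/null || true
```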
: currently batch my updates in lots of 100 and between batches I close and
: reopen the "connection" to solr like so:
: private void closeConnection() {
: solrCore.close();
: solrCore = null;
: logger.debug("Closed solr connection");
: }
:
: Does anyone have any
If you rebuild Solr, the safe method is rm -r *tomcat*/webapps/*.
2007/11/1, Chris Hostetter <[EMAIL PROTECTED]>:
>
>
> : Is there an easy to find out which version of solr is running. I
> installed
> : solr 1.2 and set up an instance using Tomcat. It was successful before.
>
> FYI: starting a while b
: I would think you would see better performance by allowing auto commit
: to handle the commit size instead of reopening the connection all the
: time.
if your goal is "fast" indexing, don't use autoCommit at all ... just
index everything, and don't commit until you are completely done.
auto
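The same idea as a command-line sketch (endpoint and batch directory are examples):

```shell
SOLR_UPDATE='http://localhost:8983/solr/update'
# Post every batch without committing...
for f in ./batches/*.xml; do
  [ -e "$f" ] || continue   # nothing to post in this sketch
  curl -s "$SOLR_UPDATE" -H 'Content-type:text/xml; charset=utf-8' \
    --data-binary @"$f" -o /dev/null || true
done
# ...then issue one commit at the very end.
curl -s "$SOLR_UPDATE" -H 'Content-type:text/xml; charset=utf-8' \
  --data-binary '<commit/>' -o /dev/null || true
```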
: ("auto repair")          100384 hits   946 ms
: (auto repair)            100384 hits    31 ms
: ("car repair"~100)       112183 hits   766 ms
: (car repair)             112183 hits    63 ms
: ("business service"~100) 1209751 hits 1500 ms
: (business service)       1209751 hits  234 ms
: ("shopping center"~100)  119481 hits   359 ms
: (shopping c
: how can I permanently change the loglevel of SOLR output to WARNING? I
FYI: I've added a FAQ about this, but it probably won't be of much
immediate help to you ... you need to consult the Jetty docs to see how to
change the JDK Logging Level for each app
(unless someone would care to update t
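For the JDK-logging route, the relevant lines in a logging.properties file look roughly like this (the logger names are my assumption for Solr's packages, and the container must be started with -Djava.util.logging.config.file pointing at the file):

```properties
# Only warnings and above reach the console.
.level = WARNING
org.apache.solr.level = WARNING
java.util.logging.ConsoleHandler.level = WARNING
```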
> Date: Wed, 31 Oct 2007 19:19:07 -0700
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: RE: Phrase Query Performance Question
>
> : ("auto repair") 100384 hits 946 ms
> : (auto repair) 100384 hits 31 ms
> : ("car repair"~100) 112183 hits 766 ms
> : (car repair) 112183 hits
> Date: Wed, 31 Oct 2007 17:54:53 -0700
> Subject: Re: Phrase Query Performance Question
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
>
> "hurricane katrina" is a very expensive query against a collection
> focused on Hurricane Katrina. There will be many matches in many
> doc