Although field collapsing worked fine in my brief testing,
when I put it to work with more documents, I started to get
exceptions. It seems to have something to do with the queries
(or documents, since different queries return different
documents). With some queries, this exception does not happen.
> ...that had one of these keywords would match it if it
> were used in the query...
> If there is no list here and you're just indexing all the content of
> all these sites... isn't that what Nutch is designed for?
> --
> Steve
> On Jun 18, 2008, at 11:05 PM, JLIST wrote:
Hi all,
This is what I'm trying to do: since some sources (say,
some web sites) are more authoritative than other sources
on certain subjects, I'd like to promote those sites when
the query contains certain keywords. I'm not sure what
is the best way to implement this. I suppose I can index
the ke
Sounds like the web designer's fault. No permission check and no
confirmation for deletion?
> Never, never delete with a GET. The Ultraseek spider deleted 20K
> documents on an intranet once because they gave it admin perms and
> it followed the "delete this page" link on every page.
GET makes it possible to delete from a browser address bar,
which you can not do with DELETE :)
> As for POST vs. GET - don't let REST purists hear you. :)
> Actually, isn't there a DELETE HTTP method that REST purists
> would say should be used in case of doc deletion?
> That looks right. CollapseComponent replaces QueryComponent.
Does it mean that if collapse parameters show up in the URL,
CollapseComponent will be used automatically, and if not,
QueryComponent will be used? Or is it always going to be
CollapseComponent, which defaults to QueryComponent's behavior?
I had the patch problem, but I manually created that file and
the Solr nightly builds fine.
After replacing solr.war with apache-solr-solrj-1.3-dev.jar,
in solrconfig.xml, I added this:
Then I added this to the standard and dismax handlers:
collapse
I added &collapse.field=&collaps
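For reference, a rough sketch of the sort of wiring the SOLR-236 patch expects; the component class name and parameter names below are assumptions based on the patch around that time, so check the JIRA issue for the authoritative version. In solrconfig.xml the collapse component is registered and then listed in place of the stock query component (since, as noted above, it subsumes QueryComponent):

  <!-- assumption: package/class name may differ between patch versions -->
  <searchComponent name="collapse"
                   class="org.apache.solr.handler.component.CollapseComponent"/>

  <requestHandler name="standard" class="solr.SearchHandler" default="true">
    <arr name="components">
      <str>collapse</str>
      <str>facet</str>
      <str>highlight</str>
      <str>debug</str>
    </arr>
  </requestHandler>

A query would then look something like this (the field name "site" is just an example):

  http://localhost:8983/solr/select?q=*:*&collapse.field=site&collapse.max=2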
It seems that the web interface only supports select but not delete.
Is it possible to do delete from the browser? It would be nice to be
able to do delete and commit, and even post (put XML in an html form)
from the admin web interface :)
Also, does delete have to be a POST? A GET should do.
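Deletes can at least be done from the command line: the update handler accepts the delete and commit messages as POSTed XML. A minimal sketch with curl (the document id is made up for illustration):

  # delete one document by id, then commit
  curl http://localhost:8983/solr/update -H 'Content-Type: text/xml' \
       --data-binary '<delete><id>12345</id></delete>'
  curl http://localhost:8983/solr/update -H 'Content-Type: text/xml' \
       --data-binary '<commit/>'

Delete-by-query works the same way, with <delete><query>...</query></delete> in the body.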
Hmm... I tried it with a Windows native port of patch, cygwin patch
and also on Linux and got the same error.
Is this, by any chance, going to be in solr 1.3 soon?
Thanks,
Jack
> That looks like the correct way to apply the patch. I tried it and it worked
> for me.
> Otis
> - Original Message
Hello Otis,
https://issues.apache.org/jira/browse/SOLR-236 has links for
a lot of files. I figure this is what I need:
10. solr-236.patch (24 kb)
So I downloaded the patch file, and also downloaded 2008/06/16
nightly build, then I ran this, and got an error:
$ patch -p0 -i solr-236.patch --dry-run
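In case it helps with reproducing this: the usual sequence is to run the dry run from the top of the unpacked source tree and, only if it applies cleanly, repeat without --dry-run (the directory name below is just an example):

  cd apache-solr-nightly
  patch -p0 --dry-run -i solr-236.patch
  patch -p0 -i solr-236.patch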
Hello Chris,
> : If this is how it works, it sounds like the bq will be used first
> : to get a result set, then the result set will be sorted by q
> : (relevance)?
> no. bq doesn't influence what matches -- that's q -- bq only influences
> the scores of existing matches if they also match the bq
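A hedged example may make the distinction concrete (the field name and boost value are made up for illustration). With dismax, q selects the matching set and bq only re-scores documents inside it:

  http://localhost:8983/solr/select?qt=dismax&q=solar+panels&bq=site:authority.example.com^5.0

Documents have to match "solar panels" to be returned at all; those that additionally match the bq clause get their scores bumped, which pushes them up the ranking without admitting anything new into the result set.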
Hello Chris,
> it sounds like you only attempted tweaking the boost value, and not
> tweaking the function params ... you can change the curve so that really
> new things get a large score increase, but older things get less of an
> increase.
recip(rord(creationDate),1,a,b)^w
I was tweaking the
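To spell out how those parameters shape the curve (the concrete numbers below are only illustrative, and this assumes the function goes into dismax's bf parameter): recip(x,m,a,b) computes a/(m*x+b), and rord(creationDate) is the reverse ordinal of the date, i.e. 1 for the newest document. So with something like

  bf=recip(rord(creationDate),1,1000,1000)^3

the newest document gets a value near 1.0, the 1000th-newest gets about 0.5, and older documents tail off gradually. Shrinking a and b makes the drop-off much steeper, so only genuinely new documents get a meaningful bump, while the trailing ^3 weight only scales how much the whole function counts against the relevance score. That is the difference between tweaking the boost value and tweaking the function params.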
Hello Otis,
Could you be a bit more specific or point me to some documentation
pages? Can this be done through modifying schema and solrconfig or
does it involve some coding? This sounds like a generic problem to me
so I'm hoping to find a generic solution.
Thanks,
Jack
Tuesday, May 20, 2008, 9:
I'm indexing pages from multiple domains. In any given
result set, I don't want to return more than two links
from the same domain, so that the first few pages won't
be all from the same domain. I suppose I could get more
(say, 100) pages from solr, then sort in memory in the
front-end server to mi
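If the front-end route is taken, the post-processing itself is short. A rough SolrJ-based sketch in Java (the field name "domain" is assumed; the cap of two per domain matches the requirement above):

  import java.util.*;
  import org.apache.solr.common.SolrDocument;
  import org.apache.solr.common.SolrDocumentList;

  public class DomainCapper {
      // Keep at most maxPerDomain results per domain, preserving Solr's ranking order.
      public static List<SolrDocument> capPerDomain(SolrDocumentList docs, int maxPerDomain) {
          Map<String, Integer> seen = new HashMap<String, Integer>();
          List<SolrDocument> out = new ArrayList<SolrDocument>();
          for (SolrDocument doc : docs) {
              String domain = (String) doc.getFieldValue("domain");
              Integer prev = seen.get(domain);
              int count = (prev == null) ? 0 : prev;
              if (count < maxPerDomain) {
                  out.add(doc);
                  seen.put(domain, count + 1);
              }
          }
          return out;
      }
  }

The other route is the SOLR-236 field collapsing patch discussed elsewhere on the list, which does this kind of per-field deduplication inside Solr itself.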
Otis
> --
> Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
> - Original Message
>> From: JLIST <[EMAIL PROTECTED]>
>> To: solr-user@lucene.apache.org
>> Sent: Tuesday, May 13, 2008 5:42:38 AM
>> Subject: the time factor
>>
>> Hi
Hi,
I'm indexing news articles from a few news feeds.
With news, there's the factor of relevance and also the
factor of freshness. Relevance-only results are not satisfactory.
Sorting on feed update time is not satisfactory, either,
because one source may update more frequently than the
others and
Thanks Otis. The schema.xml actually explains it very well!
> A good place to look is the Wiki. Look for "Analyzer" substring on the main
> Solr wiki page.
>> I must be overlooking ... where can I find definitions of
>> the built-in types such as textTight, text_ws, etc?
I must be overlooking ... where can I find definitions of
the built-in types such as textTight, text_ws, etc?
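In case it saves someone the search: they are defined in the example schema.xml as <fieldType> entries, each with its analyzer chain spelled out. As one instance (reproduced from memory, so check your own copy for the authoritative definition), text_ws is simply whitespace tokenization:

  <fieldType name="text_ws" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    </analyzer>
  </fieldType>

textTight sits in the same file with a longer filter chain aimed at tighter matching.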
> ...http://domain/articles/2008/* to find docs with that URL prefix.
> ----- Original Message
>> From: JLIST <[EMAIL PROTECTED]>
>> To: solr-user@lucene.apache.org
>> Sent: Saturday, May 3, 2008 9:59:47 PM
>> Subject: startsWith?
>>
>> Hi, I wonder
Hi, I wonder if it's possible to search text/string fields that start
with a substring, similar to Java's startsWith function? For example,
if I have a URL indexed as a text or string field, can I find URLs that
start with "http://domain/articles/2008/"?
If not, what's the best way to implement a
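Building on the wildcard approach mentioned in the reply above, a hedged sketch: this only behaves like startsWith if the URL lives in a non-tokenized field (a string field, with "url" assumed as the field name here), and the colon inside the value has to be escaped for the standard query parser:

  q=url:http\://domain/articles/2008/*

Remember to URL-encode the whole parameter when putting it into the request. On an ordinary tokenized text field the URL is split into terms at index time, so a prefix match no longer means "starts with the full string".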
This is an old problem that has been reported before.
After Solr has been running for a while, /solr/admin becomes
unavailable while search still works. I'm using embedded
Jetty. This is Solr 1.2. Any chance this has been fixed
in the development branch?
Hi all,
Just want to confirm: when the dismax request handler is used,
field names in "q" are not supported? I'm asking because this query
gives one result:
http://localhost:8983/solr/select/?q=id%3A2023706&version=2.2&start=0&rows=10&indent=on
while this gives 0, the only difference being the qt argument.
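For what it's worth, that is the expected dismax behavior: q is treated as plain end-user keywords spread across the qf fields, so a clause like id:2023706 is not parsed as a field query. One hedged workaround, assuming your build supports the q.alt parameter, is to push the fielded restriction into a filter query:

  http://localhost:8983/solr/select/?qt=dismax&q.alt=*:*&fq=id:2023706&rows=10&indent=on

q.alt is parsed with the standard syntax and takes over when q is absent, and fq accepts standard syntax as well.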