Hossman, thank you for clearing that up. The reason I create a new searcher
for every search is that the index is frequently updated, and as far as I
can tell from the documentation, a searcher will not detect changes in the
index that occurred after it was opened. I tried using just one searcher, but
: unless you *really* need to - and feel comfortable with the code, the example
: on EmbeddedSolr may lead to more trouble than it is worth.
:
: If possible, I'd suggest using solrj. If you need to work in an embedded
: environment, you get the same API (and can change later if needed)
: see:
:
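For what it's worth, a bare-bones SolrJ sketch of that pattern could look like the following (this assumes a Solr server already running at http://localhost:8983/solr, the CommonsHttpSolrServer client class from the SolrJ of this era, and made-up field names). The relevant point for the frequently-updated index: the client never manages searchers -- after commit(), Solr opens a fresh searcher itself, so later queries see the new documents.

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class SolrjSketch {
    public static void main(String[] args) throws Exception {
        // placeholder URL; point this at your own Solr instance
        CommonsHttpSolrServer server =
            new CommonsHttpSolrServer("http://localhost:8983/solr");

        // index a document
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "doc-1");
        doc.addField("title", "hello world");
        server.add(doc);

        // commit: Solr opens a new searcher server-side
        server.commit();

        // the freshly committed document is now visible to queries
        QueryResponse rsp = server.query(new SolrQuery("title:hello"));
        System.out.println("hits: " + rsp.getResults().getNumFound());
    }
}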
Thanks Hoss,
I have specified solr/home = c:\web\solr1, and have now updated
solrconfig.xml accordingly and got results.
LM
On 1/6/08, Chris Hostetter <[EMAIL PROTECTED]> wrote:
>
>
> : I have tried with solr1.xml and added a solr/home in that, but after that it's
> : not showing any results, bec
1. I am using EmbeddedSolr and using the example from here:
http://wiki.apache.org/solr/EmbeddedSolr
I just noticed that there is a note there saying that the page is out of
date; is that true, and if so, is there an example that uses Solrj?
unless you *really* need to - and feel comfortable with
All the official (and soon to be official) releases are Java 1.5+.
If you really need a 1.4 version, you are sort of on your own... but
check an early version of SOLR-20 -- I think from July 2006:
https://issues.apache.org/jira/browse/SOLR-20
ryan
Sean Laval wrote:
can anyone offer any advic
Got it. Smart.
Thx
On 1/6/08, Chris Hostetter <[EMAIL PROTECTED]> wrote:
>
> : number than the default one and I was wondering is there any disadvantage in
> : having a big number/cache? BTW, where is the TTL controlled?
>
> no disadvantage as long as you've got the RAM ... NOTE: the magic "512
Of course -- and now I feel silly for not having thought of that :-).
Thanks!
On Jan 6, 2008 4:37 PM, Walter Underwood <[EMAIL PROTECTED]> wrote:
> Field collapsing might work for you. I haven't looked at the details
> of the implementation and it is still in development, but it is the
> right so
Field collapsing might work for you. I haven't looked at the details
of the implementation and it is still in development, but it is the
right sort of feature. You'd like to see the top N matches for
each value of the author field, right?
wunder
On 1/6/08 3:25 PM, "Charles Hornberger" <[EMAIL PRO
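Since field collapsing was still just a patch at this point, here is only a rough client-side approximation of "top N matches per author" -- one filtered query per author -- and not the field-collapsing feature itself; the field name, author values, and URL below are all placeholders.

import java.util.Arrays;
import java.util.List;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrDocument;

public class TopNPerAuthor {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server =
            new CommonsHttpSolrServer("http://localhost:8983/solr");

        // placeholder authors; in practice these could come from a
        // facet on the author field for the main query
        List<String> authors = Arrays.asList("smith", "jones");

        for (String author : authors) {
            SolrQuery q = new SolrQuery("title:widgets");  // the main query
            q.addFilterQuery("author:" + author);          // restrict to one author
            q.setRows(3);                                  // top N per author
            for (SolrDocument d : server.query(q).getResults()) {
                System.out.println(author + " -> " + d.getFieldValue("id"));
            }
        }
    }
}

Obviously this costs one extra query per author shown, so it only makes sense for a handful of authors at a time.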
I've got a problem that I'm not quite sure how to solve and am wondering if
anyone has any insight or similar experience to share.
Here's the situation: Documents in our Solr index include a field
identifying their author (we have 1000s of authors). When displaying an
individual document, we also
Maybe you can avoid reindexing the whole index then. If you add a synonym for
the term "foo", then the only documents you really need to reindex are the ones
that contain that term "foo".
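A rough sketch of that partial re-index with SolrJ -- the field name "text", the server URL, and the assumption that full documents can be rebuilt by id from a source system are all placeholders, not details from this thread:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrDocument;

public class ReindexFoo {
    public static void main(String[] args) throws Exception {
        CommonsHttpSolrServer server =
            new CommonsHttpSolrServer("http://localhost:8983/solr");

        // find only the documents that actually contain the term "foo"
        SolrQuery q = new SolrQuery("text:foo");
        q.setRows(1000);  // a real index would need to page through results
        for (SolrDocument d : server.query(q).getResults()) {
            String id = (String) d.getFieldValue("id");
            // rebuild the full document from the system of record and
            // re-add it here, e.g. server.add(buildDocumentFor(id));
            System.out.println("would reindex " + id);
        }
        server.commit();
    }
}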
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: anuvenk <
: How do I boost a field (not a term) using the standard handler syntax? I
: know I can do that with DisMax, but I'm trying to keep myself in the
: standard one. Can this be done?
there is no such thing as boosting a field ... only boosting a query.
when you specify something like "yakko^3" i
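To illustrate the point with invented field names: the standard handler only boosts query clauses, so the usual workaround for a "field boost" is to boost the clause that searches that field.

import org.apache.solr.client.solrj.SolrQuery;

public class BoostExample {
    public static void main(String[] args) {
        // not a field boost: the boost hangs off each query clause, so
        // matches on the title clause count 4x more than matches on body
        SolrQuery q = new SolrQuery("title:ipod^4 OR body:ipod");
        System.out.println(q.getQuery());
    }
}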
: number than the default one and I was wondering is there any disadvantage in
: having a big number/cache? BTW, where is the TTL controlled?
no disadvantage as long as you've got the RAM ... NOTE: the magic "512"
number you referred to isn't a "default" -- it's an "example" in the "example"
so
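For reference, that cache is configured in solrconfig.xml; the stock example of the era looked roughly like the snippet below (the numbers are just the shipped example, not a recommendation). There is no TTL setting: entries live until the LRU policy evicts them or until a commit opens a new searcher, at which point the cache is discarded (and optionally autowarmed).

<!-- in solrconfig.xml -->
<queryResultCache
    class="solr.LRUCache"
    size="512"
    initialSize="512"
    autowarmCount="256"/>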
Thanks. A factor of 20 or even 30 from my numbers still gives a much larger
number than the default one, and I was wondering is there any disadvantage in
having a big number/cache? BTW, where is the TTL controlled?
On Jan 6, 2008 7:23 AM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> On Jan 6, 2008 1
I log the search phrases the users on my site are using to search, and review
them regularly. Based on that I add synonyms for certain phrases to help
return more relevant results. The reason I don't have the synonym filter at
index time is because I can't re-index the whole (or a portion) of the data every time
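A rough sketch of that query-time-only synonym setup, along the lines of the example schema.xml (the field type name and any entries in synonyms.txt would be your own):

<!-- schema.xml: synonyms applied only in the query-time analyzer -->
<fieldType name="text_qsyn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

with synonyms.txt containing comma-separated groups such as "laptop, notebook". Because the filter runs only at query time, a new line takes effect once Solr re-reads the config, with no re-indexing at all.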
OK, the current solr dev version should now work correctly w.r.t. CSV.
I ended up rewriting the commons-csv escaping so that normal CSV
encapsulation can be handled, but also an escape can be specified for
standard backslash style escaping. Leading + trailing whitespace is
now also preserved, so o
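To make the distinction concrete with invented sample data (not from this message): the same comma-containing value written with normal CSV encapsulation versus backslash-style escaping:

id,name
1,"Smith, John"     <- encapsulation: the comma is protected by the surrounding quotes
2,Smith\, John      <- escape style: the comma is protected by a backslash escape character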
"it is very easy to simultaneously shoot yourself in the
foot while tripping over all the rope
gives you to hang yourself with"
I'm writing that one down and posting it on my wall .
Erick
On Jan 6, 2008 3:25 AM, Chris Hostetter <[EMAIL PROTECTED]> wrote:
>
> : I haven't been able to get a prof
On Jan 6, 2008 12:59 AM, s d <[EMAIL PROTECTED]> wrote:
> What is the best approach to tune queryResultCache? For example, the default
> size is size="512", but since a document id is just an int (it is an int,
> right?), i.e. 4 bytes, why not set size to 10,000,000 for example (it's only
> ~38MB).
: 1. I am using EmbeddedSolr and using the example from here:
: http://wiki.apache.org/solr/EmbeddedSolr
: I just noticed that there is a note there saying that the page is out of
: date; is that true, and if so, is there an example that uses Solrj?
I think I added that note at some point when I notic
: I have tried with solr1.xml and added a solr/home in that, but after that it's
: not showing any results, because it searches by default in
: Tomcat\solr\data\index.
If I'm understanding you: you've got a Tomcat context file named
solr1.xml, and in it you specify a JNDI value for "solr/home" ... w
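For reference, such a context fragment usually looks something like the following; the war path is a placeholder, and the solr/home value matches the one mentioned earlier in this thread.

<!-- e.g. conf/Catalina/localhost/solr1.xml -->
<Context docBase="c:\web\apache-solr.war" debug="0" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="c:\web\solr1" override="true"/>
</Context>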
: I haven't been able to get a profiler at the server yet, but I thought I
: might show how my code works, because it's quite different from the example
: in the link you provided...
I'm not even sure I really understand the origins of this thread, but
regardless of what the "main" topic is, re