Hi,
I am using the DisMaxRequestHandler to boost my query on the basis of fields.
There are 5 indexed fields which contain the search string.
The field names which have this search criteria are:
- statusName_s
- listOf_author
- prdMainTitle_s
- productDescription_s
- productURL_s
My query string is:
?q=Shahr
On Oct 4, 2008, at 4:24 AM, prerna07 wrote:
> I am using the DisMaxRequestHandler to boost my query on the basis of fields.
> There are 5 indexed fields which contain the search string.
> The field names which have this search criteria are:
> - statusName_s
> - listOf_author
> - prdMainTitle_s
> - productDescription_s
> - productURL_s
All these fields are dynamic fields, hence we don't know the names of all of
them. The number of dynamic fields is also large, and we want to search across
all of these dynamic fields.
Is there any other way of doing query field boosting?
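For reference, dismax per-field boosting goes through the qf parameter, which takes explicit field names only. A sketch of a handler definition using the fields from the question (the boost values are made-up illustrations, not recommendations):

```xml
<!-- solrconfig.xml: dismax boosts via qf; each field must be listed
     explicitly (boost values here are illustrative only) -->
<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">
      prdMainTitle_s^3.0 listOf_author^2.0 statusName_s^1.5
      productDescription_s^1.0 productURL_s^0.5
    </str>
  </lst>
</requestHandler>
```

A query like ?q=Shahr&qt=dismax would then search all five fields with those weights.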
Currently there is not a way to specify wildcards or "all" fields in a
qf parameter. However, if the goal is to make a bunch of dynamic
fields searchable, but without individual boosts, use copyField to
merge all of your desired dynamic fields into a single searchable one.
Erik
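Erik's suggestion might look like this in schema.xml; the catch-all field name "text" and the "*_s" source pattern are assumptions to adapt to the actual schema:

```xml
<!-- schema.xml: merge dynamic fields into one searchable catch-all field.
     The "text" field name and "*_s" pattern are assumed examples. -->
<field name="text" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="*_s" dest="text"/>
```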
5 minutes for only one update is slow.
On Fri, Oct 3, 2008 at 8:13 PM, Fuad Efendi <[EMAIL PROTECTED]> wrote:
> Hi Uwe,
>
> 5 minutes is not slow; commit can't be realtime... I do commit&optimize
> once a day at 3:00 AM. It takes 15-20 minutes, but I have several million
> daily updates...
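One way to keep commits this infrequent without a cron job is the autoCommit block in solrconfig.xml; a sketch, with thresholds that are arbitrary examples:

```xml
<!-- solrconfig.xml: let Solr commit on its own schedule rather than per
     update (maxDocs/maxTime values here are arbitrary examples) -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs>   <!-- commit after this many pending docs -->
    <maxTime>600000</maxTime>  <!-- ...or after 10 minutes, whichever comes first -->
  </autoCommit>
</updateHandler>
```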
Thanks Mike,
The use of fsync() might be the answer to my problem, because I have
installed Solr, for lack of other possibilities, in a zone on Solaris with ZFS,
which slows down when many fsync() calls are made. This will be fixed in an
upcoming release of Solaris, but I will move as soon as possible.
Hmm OK that seems like a possible explanation then. Still it's spooky
that it's taking 5 minutes. How many files are in the index at the
time you call commit?
I wonder if you were to simply pause for say 30 seconds, before
issuing the commit, whether you'd then see the commit go faster?
There are around 35.000 files in the index. When I started indexing 5 weeks
ago with only 2000 documents I did not see this issue. I saw it for the first
time with around 10.000 documents.
Before that I had been using the same instance on a Linux machine with up
to 17.000 documents, and I hadn't seen it there.
Yikes! That's way too many files. Have you changed mergeFactor? Or
implemented a custom DeletionPolicy or MergePolicy?
Or... does anyone know of something else in Solr's configuration that
could lead to such an insane number of files?
Mike
Uwe Klosa wrote:
There are around 35.000 files in the index.
Oh, you meant index files. I misunderstood your question. Sorry, now that I
read it again I see what you meant. There are only 136 index files. So no
problem there.
Uwe
On Sat, Oct 4, 2008 at 1:59 PM, Michael McCandless <
[EMAIL PROTECTED]> wrote:
>
> Yikes! That's way too many files. Have you changed mergeFactor?
Oh OK, phew. I misunderstood your answer too!
So it seems like fsync with ZFS can be very slow?
Mike
Uwe Klosa wrote:
Oh, you meant index files. I misunderstood your question. Sorry, now that I
read it again I see what you meant. There are only 136 index files. So no
problem there.
Uwe
Thanks Grant and Ryan, so far so good. But I am confused about one thing:
when I set this up like:
public void process(ResponseBuilder rb) throws IOException {
and put it as the last-component on a distributed search (a defaults shard
is defined in the solrconfig for the handler), the component
On Fri, Oct 3, 2008 at 2:28 PM, Michael McCandless
<[EMAIL PROTECTED]> wrote:
> Yonik, when Solr commits what does it actually do?
Less than it used to (Solr now uses Lucene to handle deletes).
A Solr-level commit closes the IndexWriter, calls some configured
callbacks, and opens a new IndexSearcher.
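The "configured callbacks" mentioned above are the updateHandler event listeners from solrconfig.xml; the stock example is the postCommit snapshooter hook:

```xml
<!-- solrconfig.xml: a postCommit callback, run by Solr after each commit.
     The snapshooter script is the example shipped in the stock config. -->
<updateHandler class="solr.DirectUpdateHandler2">
  <listener event="postCommit" class="solr.RunExecutableListener">
    <str name="exe">snapshooter</str>
    <str name="dir">solr/bin</str>
    <bool name="wait">true</bool>
  </listener>
</updateHandler>
```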
Ben, see also
http://www.nabble.com/Commit-in-solr-1.3-can-take-up-to-5-minutes-td19802781.html#a19802781
What type of physical drive is this, and what interface is used (SATA, etc.)?
What is the filesystem (NTFS)?
Did you add to an existing index from an older version of Solr, or
start from scratch?
On Sat, Oct 4, 2008 at 9:35 AM, Michael McCandless
<[EMAIL PROTECTED]> wrote:
> So it seems like fsync with ZFS can be very slow?
The other user that appears to have a commit issue is on Win64.
http://www.nabble.com/*Very*-slow-Commit-after-upgrading-to-solr-1.3-td19720792.html#a19720792
-Yonik
An "Opening Server" message always appears directly after "start commit" with
no delay. But I can see many {commit=} entries with QTime around 280.000 (four
and a half minutes).
One difference I could see to your logging is that I have waitFlush=true.
Could that have this impact?
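For context, waitFlush and waitSearcher are flags on the commit command itself; with both set to true, the client's request blocks until the flush completes and the new searcher is registered, which is why a slow commit shows up as a large QTime. An explicit commit sent as an XML update message looks like:

```xml
<!-- XML update message posted to /update: the request does not return
     until the index is flushed and the new searcher is open -->
<commit waitFlush="true" waitSearcher="true"/>
```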
Uwe
Sorry for the extended question, but I am having trouble making a
SearchComponent that can actually get at the returned response in a
distributed setup.
In my distributedProcess:
public int distributedProcess(ResponseBuilder rb) throws IOException {
How can I get at the returned results from a shard?
On Sat, Oct 4, 2008 at 11:55 AM, Uwe Klosa <[EMAIL PROTECTED]> wrote:
> A "Opening Server" is always happening directly after "start commit" with no
> delay.
Ah, so it doesn't look like it's the close of the IndexWriter then!
When do you see the "end_commit_flush"?
Could you post everything in your log?
I'm not totally on top of how distributed components work, but check:
http://wiki.apache.org/solr/WritingDistributedSearchComponents
and:
https://issues.apache.org/jira/browse/SOLR-680
Do you want each of the shards to append values, or just the final
result? If appending the values is not
The issue, I think, is that process() is never called in my component, just
distributedProcess().
The server that hosts the component is a separate Solr instance from the
shards, so my guess is process() is only called when that particular Solr
instance has something to do with the index, while distributedProcess()
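That matches how the distributed hooks are split. A minimal sketch of a distributed-aware component, assuming the Solr 1.3 SearchComponent API (class names as in SOLR-680 era code; the remaining SolrInfoMBean accessors such as getSource/getVersion are omitted for brevity, so this is a sketch rather than a compilable class):

```java
// Sketch of a distributed-aware SearchComponent, assuming solr-core is on
// the classpath (Solr 1.3 API; hypothetical class name).
import java.io.IOException;
import org.apache.solr.handler.component.ResponseBuilder;
import org.apache.solr.handler.component.SearchComponent;
import org.apache.solr.handler.component.ShardRequest;

public class MyDistributedComponent extends SearchComponent {

  @Override
  public void prepare(ResponseBuilder rb) throws IOException {
    // Runs before process()/distributedProcess() on every request.
  }

  // Runs on the instance that executes a shard sub-request (or a plain,
  // non-distributed request) -- not on the aggregating instance.
  @Override
  public void process(ResponseBuilder rb) throws IOException {
  }

  // Runs on the aggregator once per stage; return the next stage needed,
  // or STAGE_DONE when there is nothing left to do.
  @Override
  public int distributedProcess(ResponseBuilder rb) throws IOException {
    return ResponseBuilder.STAGE_DONE;
  }

  // Called on the aggregator as each shard's response arrives: this is
  // the hook where per-shard results can be read off sreq.responses.
  @Override
  public void handleResponses(ResponseBuilder rb, ShardRequest sreq) {
  }

  @Override
  public String getDescription() {
    return "example distributed component";
  }
}
```

So on a pure aggregator that never serves shard sub-requests itself, only distributedProcess() and handleResponses() fire, which is consistent with the behaviour described above.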