Hi,
I have around 10 Solr servers, each running an index of around 80-85 GB
with 16,000,000 docs. When I use distrib for querying, I am not
getting a satisfactory response time. My response time is around 4-5
seconds. Any suggestions to improve the response time for queries (to bring
it
hi sunnyfr,
I wish to clarify something.
you say that the performance is poor "during" the replication.
I suspect that the performance is poor soon after the replication. The
reason being, replication is a low-CPU activity. If you think
otherwise let me know how you found it out.
If the perf i
See also http://wiki.apache.org/solr/TermsComponent
You might be able to apply these patches to 1.3 and have them work,
but there is no guarantee. You also can get some termDocs like
capabilities through Solr's faceting capabilities, but I am not aware
of any way to get at the term vector
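To illustrate the faceting workaround mentioned above, here is a minimal sketch of building such a request; the host, port, and field name ("text") are assumptions, not from the thread:

```python
from urllib.parse import urlencode

# Illustrative host/port/field; field faceting returns per-term document
# counts, which approximates what termDocs would give you per term.
params = {
    "q": "*:*",
    "rows": 0,            # no documents needed, only the facet counts
    "facet": "true",
    "facet.field": "text",
    "facet.limit": 20,    # top 20 terms by document count
}
url = "http://localhost:8983/solr/select?" + urlencode(params)
print(url)
```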
vivek,
404 from the URL you provided in the message! Similar URLs work
OK for me.
hmm try http://localhost:8080/solr/admin/cores?action=status and see
if that gives a 404.
Also, are you running a nightly build or an svn checkout? Using Tomcat?
Perhaps it should be
http://localhost:8080/apache-so
To combat our frequent OutOfMemory Exceptions, I'm attempting to come up
with a model so that we can determine how much memory to give Solr based
on how much data we have (as we expand to more data types eligible to be
supported, this becomes more important).
Are there any published guidelines on h
Hi,
Any help on this? I've looked at DistributedSearch on the Wiki, but that
doesn't seem to be working for me on multi-core and multiple Solr
instances on the same box.
Scenario,
1) Two boxes (localhost, 10.4.x.x)
2) Two Solr instances on each box (8080 and 8085 ports)
3) Two cores on each instan
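For what it's worth, a minimal sketch of how the shards parameter is usually assembled for a scenario like the one above (host names, ports, and core names here are illustrative, not taken from the message):

```python
from urllib.parse import urlencode

# Illustrative core locations; each shards entry is host:port/path-to-core.
shards = ",".join([
    "localhost:8080/solr/core0",
    "localhost:8080/solr/core1",
    "localhost:8085/solr/core0",
    "localhost:8085/solr/core1",
])
params = {"q": "solr", "shards": shards}
url = "http://localhost:8080/solr/core0/select?" + urlencode(params)
print(url)
```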
The LukeRequest class gets me what I wanted. Thanks!
-Original Message-
From: Shalin Shekhar Mangar [mailto:shalinman...@gmail.com]
Sent: Friday, April 03, 2009 10:15 AM
To: solr-user@lucene.apache.org
Subject: Re: Remote Access To Schema Data
On 4/3/09, Erik Hatcher wrote:
>
> On Apr
Hi,
Sorry, I can't find an issue: during my replication my query response time
gets very slow.
I'm using the replication handler; is there a way to throttle the transfer
rate, or ???
11G index size
8G ram
20 requests/sec
Java HotSpot(TM) 64-Bit Server VM
10.0-b22
Java HotSpot(TM) 64-Bit Server VM
4
-Xms4G
-
Hi
I would like to know whether it uses less memory to facet on, or put weight
on, a field when I index it than when I make a dismax request.
Thanks,
--
View this message in context:
http://www.nabble.com/solr-1.4-indexation-or-request-%3E-memory-tp22913679p22913679.html
Sent from the Solr - User mail
Hi,
I have title, description and tag fields... Depending on where the searched
word is found, I would like to boost other fields like nb_views or rating
differently:
if the word is found in title, then nb_views^10 and rating^10
if the word is found in description, then nb_views^2 and rating^2
Thanks a lot for y
I want the functionality that Lucene's IndexReader.termDocs gives me. That,
or document-level access to the term vector. This
(http://wiki.apache.org/solr/TermVectorComponent?highlight=(term)|(vector)
seems to suggest that this will be available in 1.4. Is there any way to do
this in 1.3?
So I've started making a QParserPlugin to handle phrase wild card
searches but I think I need a little bit of guidance. In my plugin I've
subclassed the SolrQueryParser and overridden the getFieldQuery(...)
method so that I can handle queries that contain spaces and wildcards.
I naively tried to c
My application is using the httpclient. I will have to replace this from
solrj client.
But does the SolrJ client support passing a query with double quotes in it,
like
?q="Glorious Revolution"&qt=dismaxrequest
Thanks,
Amit Garg
Shalin Shekhar Mangar wrote:
>
> On Mon, Apr 6, 2009 at 10:56 AM, dabbo
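For reference, a minimal sketch of what the raw query string should look like once the quotes are percent-encoded (the handler name is taken from the message above; the rest is illustrative):

```python
from urllib.parse import urlencode

# The double quotes themselves must travel as %22 on the wire;
# urlencode takes care of that, plus spaces becoming "+".
params = {"q": '"Glorious Revolution"', "qt": "dismaxrequest"}
query_string = urlencode(params)
print(query_string)  # q=%22Glorious+Revolution%22&qt=dismaxrequest
```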
Hi Jacob,
Thanks for the reply. I am still trying to nail down this problem with the best
possible solution.
Yeah, I had thought about these 2 approaches, but both of them are going to
make my indexing slower. Plus the fact that I will have at least 5 rich text
files
associated with each document is
Hi,
I am sending a query to the solr search engine from my application using
httpClient. I want to search for a specific title from those available.
E.g. if the user wants to search for a book titled "Complete
Java Reference", I am sending this query to Solr having double quotes with
Hi,
we're trying to apply the French Stemmer filter with the ISO Latin
Accent filter for our index, but unfortunately, we're having some bad
behaviors for some searches. After many tries, I've found out that the
French Stemmer (or Snowball with language = "french") seems to be too
sensitive t
Shalin Shekhar Mangar wrote:
On Mon, Apr 6, 2009 at 1:52 PM, Veselin K wrote:
I'd like to copy the "text" field to a field called "preview" and
then limit the "preview" field to just a few lines of text (or number of
terms).
Then I could configure retrieving the "preview" field instead of "
Hmmm,
Not sure how this all hangs together. But editing my solrconfig.xml as follows
sorted the problem:-
to
Also, in my initial report of the issue I was misled by the log messages. The
mention of "oceania.pdf" refers to a previous successful Tika extract. There is
no mention of the filena
you may try to put true in that useCompoundFile entry; this way indexing
should use far fewer file descriptors, but it will slow down indexing; see
http://issues.apache.org/jira/browse/LUCENE-888.
Try to see whether the shortage of descriptors is related only to Solr. How
are you doing your indexing, by
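For reference, a sketch of where that entry lives in solrconfig.xml (section names may differ between Solr versions):

```xml
<indexDefaults>
  <!-- compound-file format: fewer file descriptors, somewhat slower indexing -->
  <useCompoundFile>true</useCompoundFile>
</indexDefaults>
```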
try ulimit -n5 or something
On Mon, Apr 6, 2009 at 6:28 PM, Jarek Zgoda wrote:
> I'm indexing a set of 50 small documents. I'm adding documents in
> batches of 1000. At the beginning I had a setup that optimized the index
> each 1 documents, but quickly I had to optimize after adding
I'm indexing a set of 50 small documents. I'm adding documents in
batches of 1000. At the beginning I had a setup that optimized the
index each 1 documents, but quickly I had to optimize after adding
each batch of documents. Unfortunately, I'm still getting the "Too
many open files"
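As a quick way to check what descriptor limit the indexing process inherits, a small sketch (Python is used purely for illustration here):

```python
import resource

# Read the per-process limit on open file descriptors
# (the same value that `ulimit -n` reports in the shell).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")
```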
Hi,
Apologies if this question has been answered already, I'm so new to Solr
(literally a few hours using it) that I still find some of the answers a bit
obscure.
I got Apache Solr working for a Drupal install. I must implement ASAP a
custom ordering that is fairly simple: there is a list of venue
From Hossman:
"index time field boosts are a way to express things like 'this documents
title is worth twice as much as the title of most documents'. Query time
boosts are a way to express 'I care about matches on this clause of my query
twice as much as I do about matches to other clauses of my
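A minimal sketch of a query-time boost in the dismax style described above (field names and boost values are illustrative):

```python
from urllib.parse import urlencode

# qf gives each field a query-time boost: matches in "title" here count
# four times as much as matches in "description".
params = {
    "q": "glorious revolution",
    "qt": "dismax",
    "qf": "title^2.0 description^0.5",
}
print(urlencode(params))
```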
Hi,
Can I have the dynamic field in copyField as follows,
Can anyone please tell me how to make the dynamic fields available in
one field "all"?
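The markup was stripped from the message above; as a sketch, a catch-all field for dynamic fields in schema.xml usually looks something like this (all names here are illustrative):

```xml
<!-- schema.xml sketch: funnel every *_s dynamic field into one "all" field -->
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
<field name="all" type="text" indexed="true" stored="false" multiValued="true"/>
<copyField source="*_s" dest="all"/>
```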
On Mon, Apr 6, 2009 at 4:38 PM, Andrew McCombe wrote:
>
> Just started using Solr/Lucene and am getting to grips with it. Great
> product!
Welcome to Solr!
> What is the QTime a measure of? Is it milliseconds or seconds? I tried a
> Google search but couldn't find anything definitive.
>
QTi
Hi
Just started using Solr/Lucene and am getting to grips with it. Great
product!
What is the QTime a measure of? Is it milliseconds or seconds? I tried a
Google search but couldn't find anything definitive.
Thanks In Advance
Andrew McCombe
maxBufferedDocs is deprecated; better to use ramBufferSizeMB. If you have
both specified, the more restrictive one will be used.
You can remove the config of indexDefaults if you have your index
configuration in mainIndex.
Gargate, Siddharth wrote:
>
> I see two entries of m
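As a sketch, the non-deprecated setting would sit in the mainIndex section of solrconfig.xml (the 32 MB value is illustrative):

```xml
<mainIndex>
  <!-- flush the in-memory buffer to disk once it reaches this size -->
  <ramBufferSizeMB>32</ramBufferSizeMB>
</mainIndex>
```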
the only issue you may have will be related to software that writes files in
solr-home, but the only one I can think of is dataimport.properties of DIH,
so if you use DIH, you may want to make the dataimport.properties location
configurable dynamically, like an entry in data-config.xml; otherwise
I see two entries of maxBufferedDocs property in solrconfig.xml. One in
indexDefaults tag and the other in mainIndex tag, commented as deprecated. So
is this property required, and does it get used? What if I remove the
indexDefaults tag altogether?
Thanks,
Siddharth
>Good Morning,
>
>Is there any way to specify or debug a specific DIH configuration via the
>API/http request?
>
>I have the following:
>
>
>dih_pc_default_feed.xml
>
>
>dih_pc_cms_article_feed.xml
>
>
>dih_pc_local_event_feed.xml
>
>
>For example, is there any way to specify that only the "pc_local_event"
Hello Paul,
I'm indexing with "curl http://localhost... -F myfi...@file.pdf"
Regards,
Veselin K
On Mon, Apr 06, 2009 at 02:56:20PM +0530, Noble Paul wrote:
> how are you indexing?
>
> On Mon, Apr 6, 2009 at 2:54 PM, Veselin Kantsev
> wrote:
> > Hello
Thank you very much Shalin.
Regards,
Veselin K
On Mon, Apr 06, 2009 at 02:19:05PM +0530, Shalin Shekhar Mangar wrote:
> On Mon, Apr 6, 2009 at 1:52 PM, Veselin K wrote:
>
> >
> > I'd like to copy the "text" field to a field called "preview" and
> > then limit the "preview" field to just a few l
how are you indexing?
On Mon, Apr 6, 2009 at 2:54 PM, Veselin Kantsev
wrote:
> Hello,
> apologies for the basic question.
>
> How can I avoid double indexing files?
>
> In case all my files are in one folder which is scanned frequently, is
> there a Solr feature of checking and skipping a file if
Hello,
apologies for the basic question.
How can I avoid double indexing files?
In case all my files are in one folder which is scanned frequently, is
there a Solr feature of checking and skipping a file if it has already been
indexed
and not changed since?
Thank you.
Regards,
Veselin K
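Solr itself has no such skip-if-unchanged feature out of the box; the check is usually done on the client side before posting. A minimal sketch, assuming modification times are a good enough change signal (the file and function names are hypothetical):

```python
import json
import os

STATE_FILE = "indexed_state.json"  # hypothetical bookkeeping file

def load_state():
    """Load the path -> mtime map recorded by the previous scan."""
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def files_to_index(folder, state):
    """Yield only files that are new or changed since the last scan."""
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        mtime = os.path.getmtime(path)
        if state.get(path) != mtime:
            state[path] = mtime
            yield path

def save_state(state):
    """Persist the map so the next scan can skip unchanged files."""
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)
```

Each file yielded here would then be posted to Solr; everything else is skipped.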
There is a debug mode
http://wiki.apache.org/solr/DataImportHandler#head-0b0ff832aa29f5ba39c22b99603996e8a2f2d801
On Mon, Apr 6, 2009 at 2:35 PM, Wesley Small wrote:
> Good Morning,
>
> Is there any way to specify or debug a specific DIH configuration via the
> API/http request?
>
> I have the fo
On Mon, Apr 6, 2009 at 2:35 PM, Wesley Small wrote:
>
> Is there any way to specify or debug a specific DIH configuration via the
> API/http request?
>
> I have the following:
>
>
> dih_pc_default_feed.xml
>
>
> dih_pc_cms_article_feed.xml
>
>
> dih_pc_local_event_feed.xml
>
>
That is not a
Good Morning,
Is there any way to specify or debug a specific DIH configuration via the
API/http request?
I have the following:
dih_pc_default_feed.xml
dih_pc_cms_article_feed.xml
dih_pc_local_event_feed.xml
For example, is there any way to specify that only the "pc_local_event" be process
(impor
On Mon, Apr 6, 2009 at 1:52 PM, Veselin K wrote:
>
> I'd like to copy the "text" field to a field called "preview" and
> then limit the "preview" field to just a few lines of text (or number of
> terms).
>
> Then I could configure retrieving the "preview" field instead of "text"
> upon search.
>
>
Hey there,
Don't know if I should ask this here or in the Lucene Users forum...
I have a question about field boosting (I am using dismax). I use document
boosting at index time to give more importance to some documents. At this
point, I don't care about the matching; I just want to tell Solr/Lucene that
Hello,
I'm trying to tune my Solr installation, specifically the search
results.
At present, my search queries return some standard fields like filename,
filepath and text of the matching file.
However the text field contains the full contents of the file, which is
not very efficient in my case.
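One common remedy for the situation described above is to ask Solr for only the lightweight fields via the fl parameter. A minimal sketch (field names taken from the message, the rest illustrative):

```python
from urllib.parse import urlencode

# fl restricts the stored fields returned per hit, so the large
# "text" field is never shipped back with the results.
params = {"q": "some search", "fl": "filename,filepath,score"}
print(urlencode(params))  # q=some+search&fl=filename%2Cfilepath%2Cscore
```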
On Mon, Apr 6, 2009 at 10:56 AM, dabboo wrote:
>
> I want to pass double quotes to my solr from the front end, so that it can
> return the specific results of that particular phrase which is there in
> double quotes.
>
> If I use httpClient, it doesn't allow me to send the query in this format.
>