On Wed, Jan 7, 2009 at 7:47 AM, Jim Adams wrote:
> Can someone explain what this means to me?
>
> I'm having a similar performance issue - it's an index with only 1 million
> records or so, but when trying to search on a date range it takes 30
> seconds! Yes, this date is one with hours, minutes
Photo objects? Is it binary data you are trying to send in an XML request?
On Wed, Jan 7, 2009 at 12:57 AM, Tushar_Gandhi <
tushar_gan...@neovasolutions.com> wrote:
>
> Hi,
> I am getting an error whenever I try to index photo
> objects specifically.
> For other objects it is working.
>
>
On Wed, Jan 7, 2009 at 8:27 AM, Jim Adams wrote:
> Why is NFS mounting such a bad idea? Some solutions for high-availability
> disks suggest that you DO mount the disks via NFS on the boxes that need
> the data.
>
>
Network requests for each read/write? You can do some benchmarks yourself
and if you fin
You'll need to re-index.
On Wed, Jan 7, 2009 at 9:49 AM, Jim Adams wrote:
> It's a range query. I don't have any faceted data.
>
> Can I limit the precision of the existing field, or must I re-index?
>
> Thanks.
>
> On Tue, Jan 6, 2009 at 8:41 PM, Yonik Seeley wrote:
>
> > On Tue, Jan 6, 2009
Which approach worked? I suggested three
Jetty automatically loads jars in WEB-INF/lib
it is the responsibility of Solr to load jars from solr.home/lib
it is the responsibility of the JRE to load jars from JAVA_HOME/lib/ext
On Tue, Jan 6, 2009 at 6:18 PM, Performance wrote:
>
> Paul,
>
> Thanks fo
It's a range query. I don't have any faceted data.
Can I limit the precision of the existing field, or must I re-index?
Thanks.
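One common workaround for slow date range queries (a sketch under the assumption that day-level granularity is acceptable; the field names are hypothetical) is to index an additional field with the date rounded down to midnight, so all documents from one day share a single term instead of each timestamp being unique:

```python
from datetime import datetime

def round_to_day(dt):
    """Truncate a datetime to midnight so all docs from one day share a term."""
    return dt.replace(hour=0, minute=0, second=0, microsecond=0)

def solr_date(dt):
    """Format a datetime in the ISO-8601 form Solr expects."""
    return dt.strftime("%Y-%m-%dT%H:%M:%SZ")

created = datetime(2009, 1, 7, 9, 49, 12)
doc = {
    "created": solr_date(created),                    # full precision, for display
    "created_day": solr_date(round_to_day(created)),  # coarse, for range queries
}
print(doc["created_day"])  # 2009-01-07T00:00:00Z
```

Range queries then go against the coarse field, which has at most 366 distinct terms per year instead of one per document. Note that adding such a field does require re-indexing existing documents.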
On Tue, Jan 6, 2009 at 8:41 PM, Yonik Seeley wrote:
> On Tue, Jan 6, 2009 at 10:06 PM, Jim Adams wrote:
> > Are there any particular suggestions on memory size for
the root node is not
it should be
On Wed, Jan 7, 2009 at 4:23 AM, The Flight Captain
wrote:
>
> If I add the tag and an entity, I still get the same error when
> starting up JBoss.
>
> Here is my full data-config.xml
>
>
>driver="oracle.jdbc.OracleDriver"
> url="jdbc:or
Paging is possible w/ XPathEntityProcessor.
Look at $hasMore and $nextUrl in the documentation.
If you can explain better, I may be able to give a better solution,
e.g. where is the metadata coming from and what is the datasource?
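A rough sketch of such a paged data-config.xml entity (the URL, XPaths, and field names here are hypothetical; the real ones depend on your feed): binding fields to the special `$hasMore` and `$nextUrl` variables tells the XPathEntityProcessor to keep fetching pages while `$hasMore` is true, using `$nextUrl` as the next request URL.

```xml
<entity name="items"
        processor="XPathEntityProcessor"
        url="http://example.com/feed?page=1"
        forEach="/response/items/item">
  <!-- special DIH variables driving pagination -->
  <field column="$hasMore" xpath="/response/hasMore"/>
  <field column="$nextUrl" xpath="/response/nextUrl"/>
  <field column="id"       xpath="/response/items/item/id"/>
</entity>
```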
On Tue, Jan 6, 2009 at 11:17 PM, Jon Baer wrote:
> Hi,
>
> Anyon
On Tue, Jan 6, 2009 at 10:06 PM, Jim Adams wrote:
> Are there any particular suggestions on memory size for a machine? I have a
> box that has only 1 million records on it - yet I'm finding that date
> searches are already unacceptably slow (30 seconds). Other searches seem
> okay though.
I ass
Are there any particular suggestions on memory size for a machine? I have a
box that has only 1 million records on it - yet I'm finding that date
searches are already unacceptably slow (30 seconds). Other searches seem
okay though.
Thanks!
On Thu, Dec 18, 2008 at 2:02 PM, Yonik Seeley wrote:
Why is NFS mounting such a bad idea? Some solutions for high-availability disks
suggest that you DO mount the disks via NFS on the boxes that need the data.
On Mon, Dec 29, 2008 at 7:42 PM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
> What you have below is not really what we call "Distribute
Can someone explain what this means to me?
I'm having a similar performance issue - it's an index with only 1 million
records or so, but when trying to search on a date range it takes 30
seconds! Yes, this date is one with hours, minutes, seconds in them -- do I
need to create an additional field
So did anyone put together a FAQ on this subject? I am also interested in
seeing the different ways to get dynamic faceting to work.
In this post, Chris Hostetter dropped a piece of handler code. Is it still
the right path to take for those generated ranges:
$0..$20 (3)
$20..$75 (15)
$75..$123 (8)
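Ranges like those can be requested without custom handler code by sending one `facet.query` parameter per bucket and reading the counts back (a sketch; the field name `price` and the bucket edges are assumptions):

```python
def range_facet_params(field, edges):
    """Build one facet.query parameter per [lo TO hi] bucket."""
    params = [("facet", "true")]
    for lo, hi in zip(edges, edges[1:]):
        params.append(("facet.query", f"{field}:[{lo} TO {hi}]"))
    return params

params = range_facet_params("price", [0, 20, 75, 123])
for name, value in params:
    print(f"{name}={value}")
```

The bucket edges themselves can be computed per request, which is what makes the faceting "dynamic": nothing about the ranges has to be baked into the index.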
What's the XML you're sending it? It's got something invalid in it,
obviously.
How are you indexing? Via SolrJ? Or some other POST way?
Erik
On Jan 6, 2009, at 2:27 PM, Tushar_Gandhi wrote:
Hi,
I am getting an error whenever I try to index photo
objects specifically.
F
If I add the tag and an entity, I still get the same error when
starting up JBoss.
Here is my full data-config.xml
I also have this field in my schema.xml nested under
When I restart Jboss I get the same stacktrace.
...
2009-01-07
Thanks, this fixed the problem. Maybe this parameter could be added to the
standard request handler in the sample solrconfig.xml, as it is confusing
that it uses the default request handler's defType even when not using that
handler. I didn't completely understand your explanation, though. Thanks f
On Tue, Jan 6, 2009 at 5:01 PM, Mark Ferguson wrote:
> It seems that the problem is related to the defType parameter. When I
> specify defType=, it uses the correct request handler. It seems that it is
> using the correct request handler, but it is defaulting to defType=dismax,
> even though I hav
Lower your mergeFactor and Lucene will merge segments (i.e. fewer index files)
and purge deletes more often for you at the expense of somewhat slower indexing.
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
> From: wojtekpia
> To: solr-user@lucen
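In solrconfig.xml the mergeFactor advice above would look something like this (the value 4 is only an illustration; the default at the time was 10, and lower values mean fewer segments at the cost of slower indexing):

```xml
<mainIndex>
  <!-- lower mergeFactor => segments merged more aggressively -->
  <mergeFactor>4</mergeFactor>
</mainIndex>
```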
I apologize, entering the defType parameter explicitly has nothing to do
with it, this was a caching issue. I tested the different configurations
thoroughly, and this is what I've come up with:
- When using 'dismax' request handler as default:
- Queries are always parsed using the dismax par
I'm optimizing because I thought I should. I'll be updating my index
somewhere between every 15 minutes, and every 2 hours. That means between 12
and 96 updates per day. That seems like a lot of index files (and it scared
me a little), so that's my second reason for wanting to optimize nightly.
I
It seems that the problem is related to the defType parameter. When I
specify defType=, it uses the correct request handler. It seems that it is
using the correct request handler, but it is defaulting to defType=dismax,
even though I have not specified that parameter in the standard request
handler
Hi,
In my solrconfig.xml file there are two request handlers configured: one
uses defType=dismax, and the other doesn't. However, it seems that when the
dismax request handler is set as my default, I have no way of using the
standard request handler. Here is the relevant part of my solrconfig.xml
Kind of a side-note, but I think it may be worth your while.
If your queryResultCache hit rate is 65%, consider putting a reverse
proxy in front of Solr. It can give performance boosts over the query
cache in Solr, as it doesn't have to pay the cost of reformulating the
response. I've used Varnish
OK, so that question/answer seems to have hit the nail on the head. :)
When you optimize your index, all index files get rewritten. This means that
everything that the OS cached up to that point goes out the window and the OS
has to slowly re-cache the hot parts of the index. If you don't opt
I use my warm up queries to fill the field cache (or at least that's the
idea). My filterCache hit rate is ~99% & queryResultCache is ~65%.
I update my index several times a day with no 'optimize', and performance is
seamless. I also update my index once nightly with an 'optimize', and that's
wh
Is autowarm count of 0 a good idea, though?
If you don't want to autowarm any caches, doesn't that imply that you have very
low hit rate and therefore don't care to autowarm? And if you have a very low
hit rate, then perhaps caches are not needed at all?
How about this. Do you optimize your i
Thanks Yonik!
I still may investigate the query function stuff that was discussed, as
Hoss indicated it may hold value.
-Todd Feak
-Original Message-
From: Yonik Seeley [mailto:ysee...@gmail.com]
Sent: Tuesday, January 06, 2009 10:19 AM
To: solr-user@lucene.apache.org
Subject: Re: Using
On Tue, Jan 6, 2009 at 1:05 PM, Feak, Todd wrote:
> I'm not sure I followed all that Yonik.
>
> Are you saying that I can achieve this effect now with a bq setting in
> my DisMax query instead of via a bf setting?
Yep, a "const" QParser would enable that.
bq={!const}foo:bar
-Yonik
Hi,
Anyone have a quick, clever way of dealing w/ paged XML for
DataImportHandler? I have metadata like this:
1
3
15
I unfortunately can not get all the data in one shot so I need to
maybe a number of requests obtained from the paging me
On Tue, Jan 6, 2009 at 10:41 AM, Feak, Todd wrote:
> The boost queries are true queries, so the amount of boost can be affected
> by things like term frequency for the query.
Sounds like a constant score query is a general way to do this.
Possible QParser syntax:
{!const}tag:FOO OR tag:BAR
Could b
Sorry, I forgot to include that. All my autowarmCount values are set to 0.
Feak, Todd wrote:
>
> First suspect would be Filter Cache settings and Query Cache settings.
>
> If they are auto-warming at all, then there is a definite difference
> between the first start behavior and the post-commit beh
I'm not sure I followed all that Yonik.
Are you saying that I can achieve this effect now with a bq setting in
my DisMax query instead of via a bf setting?
-Todd Feak
-Original Message-
From: Yonik Seeley [mailto:ysee...@gmail.com]
Sent: Tuesday, January 06, 2009 9:46 AM
To: solr-user@l
First suspect would be Filter Cache settings and Query Cache settings.
If they are auto-warming at all, then there is a definite difference
between the first start behavior and the post-commit behavior. This
affects what's in memory, caches, etc.
-Todd Feak
-Original Message-
From: wojte
I'm running load tests against my Solr instance. I find that it typically
takes ~10 minutes for my Solr setup to "warm-up" while I throw my test
queries at it. Also, I have the same two warm-up queries specified for the
firstSearcher and newSearcher event listeners.
I'm now benchmarking the affe
For date range search, use <fieldname>:[date1T23:59:59Z TO date2T23:59:59Z].
Thanks,
Sourabh
Gavin-39 wrote:
>
> Hi,
> Can someone tell me how I can achieve date range searches? For
> instance if I save the DOB as a solr date field how can I do a search to
> get the people,
> 1. Who are
Thanks.
When I set URIEncoding="UTF-8" in Tomcat's server.xml file, some of the
special chars are indexed and searchable, while others are not.
e.g. Bernhard Schölkopf, János Kornai
These are indexed and searchable after the above change. On the browser,
however, some others display as junk chars
:It should be fairly predictible, can you elaborate on what problems you
:have just adding boost queries for the specific types?
The boost queries are true queries, so the amount of boost can be affected
by things like term frequency for the query. The functions aren't
affected by this and therefore
Hi,
I use the DIH with an RDBMS for indexing a large MySQL database with
about 7 million entries.
Full index is working fine, in schema.xml I implemented a uniqueKey
field (which is of the type 'text').
I start queries with the dismax query handler, and get my results as
a PHP array.
Now, s
Paul,
Thanks for the feedback and it does work. So if I understand this the app
server code (Jetty) is not reading in the environment variables for the
other libraries I need. How do I add the JDBC files to the path so that I
don't need to copy the files into the directory? Does jetty have a c
Hi Bhawani,
Your query should be *:*. Try this and have fun!
Bhawani Sharma wrote:
>
> Hi All,
>
> I want to fetch all the data from database.
> so what my Solr query should be to get all documents from database?
> like in MySQL the syntax is: SELECT * FROM table;
> so what will be the syntax
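As a concrete request, the match-all query against a default Solr install would look like this (host, port, and path are assumptions about a standard setup):

```
http://localhost:8983/solr/select?q=*:*&rows=10
```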
Kumar's advice is sound. You must make sure you are actually indexing the
special symbols.
To make a query with special characters you must make sure you urlencode the
parameters before sending them to Solr.
There are some symbols which have a special meaning in the Lucene query
syntax, such as '+', '
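A minimal sketch of both steps being described (backslash-escaping the Lucene query-syntax characters, then URL-encoding the parameter), assuming the standard query parser and a hypothetical `title` field:

```python
import re
from urllib.parse import urlencode

# Characters with special meaning in the Lucene query syntax
LUCENE_SPECIALS = re.compile(r'([+\-!(){}\[\]^"~*?:\\]|&&|\|\|)')

def escape_lucene(term):
    """Backslash-escape Lucene query-syntax characters in a literal term."""
    return LUCENE_SPECIALS.sub(r'\\\1', term)

escaped = escape_lucene('C++ (programming)')
print(urlencode({'q': f'title:{escaped}'}))
```

The escaping step keeps the characters as literal text for the query parser; the `urlencode` step keeps them intact over HTTP.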
On Tue, Jan 6, 2009 at 3:09 PM, Bhawani Sharma wrote:
>
> Hi All,
>
> I want to fetch all the data from database.
> so what my Solr query should be to get all documents from database?
> like in MySQL the syntax is: SELECT * FROM table;
> so what will be the syntax of this query in Solr?
If you me
Filtering of special characters depends on the filters you use for the
fields in your schema.xml.
If you are using WordDelimiterFilterFactory in your analyzer then the
special characters get removed during the processing of your field. But
the WordDelimiterFilterFactory does a lot of other things
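For reference, this is the kind of analyzer configuration being discussed (a sketch; your field type name and the exact filter attributes will differ). Removing or reconfiguring WordDelimiterFilterFactory changes whether characters like '+' or '-' survive analysis:

```xml
<fieldType name="text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- splits on intra-word delimiters; special chars are dropped here -->
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            catenateWords="1" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```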
Hi,
The query
1. NOT(IBA60019_l:1) AND NOT(IBA60019_l:0) AND
businessType:wt.doc.WTDocument
works
But the query below does not work
2. (NOT(IBA60019_l:1) AND NOT(IBA60019_l:0)) AND
businessType:wt.doc.WTDocument
Query number 1 shows the records but Query number 2 do
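An assumption about the cause, since the thread does not show the answer: Lucene cannot match a BooleanQuery whose clauses are all negative. Solr compensates for a purely negative query at the top level, but not when the negative clauses are nested inside parentheses, which is exactly the difference between the two queries above. The usual fix is to anchor the negative group to the match-all query:

```
(*:* AND NOT(IBA60019_l:1) AND NOT(IBA60019_l:0)) AND businessType:wt.doc.WTDocument
```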
Hi All,
I want to fetch all the data from database.
so what my Solr query should be to get all documents from database?
like in MySQL the syntax is: SELECT * FROM table;
so what will be the syntax of this query in Solr?
Please reply ASAP.
Thanks in Advance.
Thanks:
Bhawani Sharma
--
View this me
Hi,
I would like to query terms containing special chars .
Regards
Sujatha
On Tue, Jan 6, 2009 at 2:59 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> You forgot to tell us what you want to do with special characters?
>
> 1. Remove them from the documents while indexing?
> 2.
You forgot to tell us what you want to do with special characters?
1. Remove them from the documents while indexing?
2. Don't remove them while indexing?
3. Query with terms containing a special character?
On Tue, Jan 6, 2009 at 2:55 PM, Sujatha Arun wrote:
> Hi,
>
> Can anyone point me to t
Hi,
Can anyone point me to the thread if it exists on indexing special
characters in solr.
Regards
Sujatha