You'll have to search the archives for a more complete explanation; I'm
going from memory here (or perhaps it's on the Wiki, I don't remember).
The notion is to break apart your timestamp (if you really, really need the
precision) into several fields rather than one, i.e. index the MMDD
as one field.
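A minimal sketch of that splitting idea in Python (the field names such as mmdd are illustrative assumptions, not anything prescribed in the thread):

```python
from datetime import datetime, timezone

def split_timestamp(ts):
    """Break one full-precision timestamp into several coarse fields,
    so each indexed field carries far fewer distinct terms."""
    return {
        "year": ts.strftime("%Y"),    # e.g. "2009"
        "mmdd": ts.strftime("%m%d"),  # e.g. "0106"
        "hhmm": ts.strftime("%H%M"),  # only if you really need time
    }

print(split_timestamp(datetime(2009, 1, 6, 21, 17, tzinfo=timezone.utc)))
# {'year': '2009', 'mmdd': '0106', 'hhmm': '2117'}
```

Each field then holds a small, bounded term vocabulary (366 mmdd values at most), which is the whole point of the split.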
On Jan 6, 2009, at 9:17 PM, Jim Adams wrote:
Can someone explain what this means to me?
The below definition sets the timestamp field without time
granularity, just day. It's the difference between, say, having
indexed a document for every millisecond in a day (what is that,
86.4M?), and indexing a single term for the whole day.
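The arithmetic behind that 86.4M figure is just the number of milliseconds in a day; a quick check:

```python
# distinct millisecond-precision terms a single day can produce
ms_per_day = 24 * 60 * 60 * 1000
print(ms_per_day)  # 86400000 -- the "86.4M" above

# rounded to day granularity, the same day contributes exactly 1 term
```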
On Wed, Jan 7, 2009 at 7:47 AM, Jim Adams wrote:
> Can someone explain what this means to me?
>
> I'm having a similar performance issue - it's an index with only 1 million
> records or so, but when trying to search on a date range it takes 30
> seconds! Yes, this date is one with hours, minutes, seconds in them -- do I
> need to create an additional field?
On 01.11.2008 06:10 Erik Hatcher wrote:
> Yeah, this should work fine:
>
> <field ... default="NOW/DAY" multiValued="false"/>
Wow, that was fast, thanks!
-Michael
On Nov 1, 2008, at 1:07 AM, Michael Lackhoff wrote:
On 31.10.2008 19:16 Chris Hostetter wrote:
> For the record, you don't need to index as a "StrField" to get this
> benefit; you can still index using DateField, you just need to round your
> dates to some less granular level. If you always want to round down, you
> don't even need to do the rounding yourself.
We have implemented the suggested reduction in granularity by dropping
time altogether and simply disallowing time filtering. This, in light
of other search filters we have provided, should prove sufficient
for our user base.
We did keep the fine-granularity field, though not for filtering.
: Concrete example, this query just took 18s:
:
: instance:client\-csm.symplicity.com AND dt:[2008-10-01T04:00:00Z TO
: 2008-10-30T03:59:59Z] AND label_facet:"Added to Position"
: I saw a thread from Apr 2008 which explains the problem as being due to
: too much precision on the DateField type
I've also seen the suggestion (more from a pure Lucene perspective) of
breaking apart your dates. Remember that the time/space issues are due
to the number of terms, so it's possible (although I haven't tried it)
to index many fewer distinct terms, e.g. by breaking your dates into
some number of fields, say year, month, and day.
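As an illustration of how a range search could then run over the split fields (the field names and query string here are assumptions for the sketch, not taken from the thread):

```python
def mmdd_range_query(year, start_mmdd, end_mmdd):
    """Build a Lucene-style range query over hypothetical split date
    fields; a month-long range now spans roughly 30 mmdd terms instead
    of millions of full-precision timestamp terms."""
    return f"year:{year} AND mmdd:[{start_mmdd} TO {end_mmdd}]"

print(mmdd_range_query("2008", "1001", "1030"))
# year:2008 AND mmdd:[1001 TO 1030]
```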
Subject: Re: date range query performance
Well, no - we don't care so much about the seconds, but hours &
minutes are indeed crucial.
---
Alok K. Dhir
Symplicity Corporation
www.symplicity.com
(703) 351-0200 x 8080
[EMAIL PROTECTED]
On Oct 29, 2008, at 4:41 PM, Chris Harris wrote:
Do you need to search down to the minutes and seconds level? If searching by
date provides sufficient granularity, for instance, you can normalize all
the time-of-day portions of the timestamps to midnight while indexing. (So
index any event happening on Oct 01, 2008 as 2008-10-01T00:00:00Z.) That
way, every event on a given day indexes the same single term.
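A sketch of that normalization step, assuming timestamps arrive as Python datetimes on the client side:

```python
from datetime import datetime, timezone

def normalize_to_midnight(ts):
    """Zero out the time-of-day so every event on the same date
    indexes one shared day-granularity term."""
    ts = ts.replace(hour=0, minute=0, second=0, microsecond=0)
    return ts.strftime("%Y-%m-%dT%H:%M:%SZ")

print(normalize_to_midnight(
    datetime(2008, 10, 1, 16, 41, 22, tzinfo=timezone.utc)))
# 2008-10-01T00:00:00Z
```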
Hi -- using solr 1.3 -- roughly 11M docs on a 64 gig 8 core machine.
Fairly simple schema -- no large text fields, standard request
handler. 4 small facet fields.
The index is an event log -- a primary search/retrieval requirement is
date range queries.
A simple query without a date range performs fine.