PM, ufuk yılmaz wrote:

Hello all,

I have a plong field in my schema representing a Unix timestamp.

I'm doing a range facet over this field to find which event occurred on which
day. I'm setting "start" to some date at 00:00, "end" to another date, and
setting "gap" to 86400 (the total seconds in a day).
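The bucket arithmetic above can be prototyped outside Solr. A minimal sketch (assuming the epoch values are seconds and the buckets should align to UTC midnight; a start that is not midnight in the intended timezone will shift every bucket):

```python
from datetime import datetime, timezone

def day_bucket_bounds(start_iso, end_iso):
    """Epoch-second lower bounds for day-long buckets, i.e. what a range
    facet with gap=86400 uses between start and end."""
    start = int(datetime.fromisoformat(start_iso)
                .replace(tzinfo=timezone.utc).timestamp())
    end = int(datetime.fromisoformat(end_iso)
              .replace(tzinfo=timezone.utc).timestamp())
    return list(range(start, end, 86400))

# Three buckets: March 1, 2, and 3 (UTC midnights).
bounds = day_bucket_bounds("2021-03-01T00:00:00", "2021-03-04T00:00:00")
```

Note that 86400-second gaps silently ignore DST transitions; that only matters if the "days" are meant in a local timezone rather than UTC.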
Hi Antony,
I don't know a ton about DIH, so I can't answer your question myself.
But you might have better luck getting an answer from others if you
include more information about the behavior you're curious about.
Where do you see this Last Modified timestamp (in the Solr admin UI?)
Hello Solr Users,
I am trying to figure out whether there is a reason for "Last Modified: about
20 hours ago" remaining unchanged after a full data import into Solr. I am
running SolrCloud 7.2.1.
I do see this value and also the numDocs value change on a Delta import.
Thanks,
Antony
Timestamp of what exactly?
If it is the general server timestamp, I think it would usually be
part of the HTTP Headers of the response.
If you are talking about the record creation date, you can set NOW as
a default field value for a date field.
If you are doing some timestamp math in the query
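Setting NOW as a default, as mentioned above, might look like this in schema.xml (the field name and type here are illustrative, not from the original mail):

```xml
<!-- Hypothetical schema.xml snippet: Solr fills this in at index time
     whenever the incoming document does not supply a value. -->
<field name="timestamp" type="pdate" indexed="true" stored="true"
       default="NOW"/>
```

One caveat discussed later in this thread: re-indexing the same document re-applies the default, so this records "last indexed" time, not "first inserted" time.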
On 8/16/2018 11:27 PM, Midas A wrote:
Hi,
In my use case I want to get the current timestamp in the response of a Solr query.
How can I do it? Is it doable?
I don't think you can. There MIGHT be a function query that can do it,
but it's not something I'm aware of without research.
Hi,
In my use case I want to get the current timestamp in the response of a Solr query.
How can I do it? Is it doable?
Regards,
Midas
> Tree in the admin UI, which means that there is a way to
> obtain the information with an HTTP API.

When cores are created or manipulated by API calls, the core.properties
file will have a comment with a timestamp of the last time Solr
wrote/changed the file. CoreAdmin operations like CREATE, SWAP, RENAME,
and others will update or create the timestamp in that comment, but
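If reading that comment directly is acceptable, a sketch of parsing it (this assumes the comment uses java.util.Date's default toString format, e.g. "#Wed Jun 12 10:00:00 UTC 2019" — check a real core.properties on your version, as the exact format is an assumption here):

```python
from datetime import datetime

def core_properties_timestamp(text):
    """Return the first comment line that parses as a Java-style date,
    or None if no comment line matches."""
    for line in text.splitlines():
        if line.startswith("#"):
            try:
                return datetime.strptime(line[1:].strip(),
                                         "%a %b %d %H:%M:%S %Z %Y")
            except ValueError:
                continue  # not a date comment, e.g. "#Written by ..."
    return None

# Hypothetical file contents, illustrating the expected shape.
sample = ("#Written by CorePropertiesLocator\n"
          "#Wed Jun 12 10:00:00 UTC 2019\n"
          "name=core1\n")
ts = core_properties_timestamp(sample)
```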
Hi,

I am working on developing a utility which lets one monitor the
index pipeline status.
The indexing job runs in two forms, where it either
1. creates a new core, OR
2. runs the delta on an existing core.
To put it in its simplest form: I look at the DB timestamp from when the indexing
job was triggered, and want to read some stat / metric from Solr
(preferably via an API) which reports a timestamp for when the core was created /
modified.
My utility completely relies on the difference between the timestamps from the DB and
Solr, as these two timestamps are leveraged
... Group A has three session_timings = [1, 2, 5], and Group B also
has three session_timings = [1, 6, 7]. If the current timestamp is 3, then Group
A should come first, because the next session for Group A is at 5, whereas for
Group B it's 7. Is this possible with Solr sorting? Or do I have to use
another way to do this? Any help would be great.
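The desired ordering is easy to state client-side, which helps pin down what Solr would need to compute. A sketch of the sort key (smallest session time strictly after "now"; groups with no future session sort last). Expressing this over a multivalued field in Solr itself is awkward, so a common workaround is to precompute a single "next_session" field at index time:

```python
def next_session_key(session_timings, now):
    """Sort key: the earliest session strictly after `now`;
    groups with no future session sort last."""
    future = [t for t in session_timings if t > now]
    return min(future) if future else float("inf")

groups = {"A": [1, 2, 5], "B": [1, 6, 7]}
order = sorted(groups, key=lambda g: next_session_key(groups[g], now=3))
# Group A (next session at 5) sorts before Group B.
```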
dataconfig.xml
query="SELECT test_data1,test_data2,test_data3, upserttime from
test_table"
autoCommit="true">
The timestamp data in Cassandra is, for example: 2017-09-20
10:25:46.752000+
Regards,
Shankha
Check out this article for working with date types, formats, etc.
http://lucene.apache.org/solr/guide/6_6/working-with-dates.html
On Wed, Sep 20, 2017 at 6:32 AM, shankhamajumdar <
shankha.majum...@lexmark.com> wrote:
Hi,
I have a field with timestamp data in Cassandra for example - 2017-09-20
10:25:46.752000+.
I am not able to import the data using the Solr DataImportHandler; I am getting the
below error in the Solr log:
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of
range: -1
I am
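One way around such parse failures is to normalize the timestamp string before it reaches Solr, since Solr's date fields expect ISO-8601 UTC with a trailing Z. A sketch (the "+0000" offset below is a hypothetical completion of the truncated value in the original mail):

```python
from datetime import datetime, timezone

def to_solr_date(cassandra_ts):
    """Convert a 'YYYY-MM-DD HH:MM:SS.ffffff+zzzz' string into the
    ISO-8601 form Solr date fields accept (UTC, millisecond precision)."""
    dt = datetime.strptime(cassandra_ts, "%Y-%m-%d %H:%M:%S.%f%z")
    dt = dt.astimezone(timezone.utc)
    # Truncate microseconds to milliseconds for Solr.
    return dt.strftime("%Y-%m-%dT%H:%M:%S.") + f"{dt.microsecond // 1000:03d}Z"

print(to_solr_date("2017-09-20 10:25:46.752000+0000"))
```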
I have a timestamp field in my schema to track when each doc was indexed.
Recently, we have switched over to using atomic updates instead of re-indexing
when we need to update a doc in the index. It looks to me like the
timestamp field is not updated during an atomic update. I have also looked
into TimestampUpdateProcessorFactory, and it looks to me that it won't
It would be nice to have "unformatted" or "timestamp" or "long" (maybe all
three) as an accepted format for the parse date update processor. Seems like
a reasonable use case. But... the standard use of parsing is to chain the
types in a hierarchy, with date and the
at 4:00 PM, Alexandre Rafalovitch
wrote:
Hello,
My data comes with the timestamp 12345654. I want that indexed as a date.
It does not seem to be happening with default date type and none of
the URPs seem to recognize that format.
Is there something terribly obvious I am missing?
Regards,
Alex.
: rather than seconds. This is how Java deals with time internally. I'm fairly
: sure that this is also how Solr's date types work internally.
More specifically: the QParser is giving you that query because the
FieldType you have for the specified field (probably TrieDateField) is
parsing the
Hi,

We are writing our own search handler, and we are facing the issue below.
We are passing a date
(Date:(["2012-10-01T00:00:00.000Z" TO "2012-10-01T23:59:59.999Z"])) for a
date range search to the QParser.getParser method, but it is converting the date
to Unix timestamp format (Date:([132217920 TO 132226559])). Is there
any way to get back the same date as we passed?

QParser queryParser = QParser.getParser(q, defType, req);
Query query = queryParser.getQuery();
DocList matchDocs2 = indexSearcher.getDocList(query, null, null, 1, 10);
If you have something else in your database that is a better indicator
of what's new than a timestamp, you can use that; you just have to pass
it in as a parameter when you access the dataimport URL over HTTP. If the
parameter on the URL is &mycolumn=myvalue, then you can use
${dih.request.mycolumn}
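Put together, a data-config sketch using a request parameter as the delta criterion might look like this (table, column, and parameter names here are hypothetical; `${dih.delta.id}` refers to the primary-key values the deltaQuery returned):

```xml
<!-- Trigger with: /dataimport?command=delta-import&lastid=12345 -->
<entity name="item"
        query="SELECT id, title FROM item"
        deltaQuery="SELECT id FROM item WHERE id &gt; '${dih.request.lastid}'"
        deltaImportQuery="SELECT id, title FROM item WHERE id = '${dih.delta.id}'"/>
```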
... column. Therefore I used 'scn_to_timestamp(ora_rowscn)' to give the
value of the required timestamps. This query returns a value of type
timestamp in the format 24-JUL-13 12.42.32.0 PM, while
dih.last_index_time is in the format 2013-07-24 12:18:03. So, I changed the
On 5/16/2013 11:00 AM, Chris Hostetter wrote:
There must be *some* way to either tweak your SQL or tweak your JDBC
connection properties such that Oracle's JDBC driver will give you a
legitimate java.sql.Date or java.sql.Timestamp instead of it's own
internal class (that doesn't extend java.util.
: > SELECT ... CAST(LAST_ACTION_TIMESTAMP AS DATE) AS LAT
:
: This removes the time part of the timestamp in Solr, although it is shown
: in PL/SQL Developer (a tool for Oracle).
Hmmm... that makes no sense to me based on 10 seconds of googling...
http://docs.oracle.com/cd/B28359
: I have a field with the type TIMESTAMP(6) in an oracle view.
...
: What is the best way to import it?
...
: This way works but I do not know if this is the best practise:
...
: TO_CHAR(LAST_ACTION_TIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') as LAT
instead of h
On Wed, May 8, 2013 at 9:35 AM, Peter Schütt wrote:
> Hallo,
Hallo,
I have a field with the type TIMESTAMP(6) in an oracle view.
When I want to import it directly into Solr I get this error message:
WARNING: Error creating document : SolrInputDocument[oid=12,
last_action_timestamp=oracle.sql.TIMESTAMP@34907781, status=2
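The error above means the Oracle driver handed DIH its own oracle.sql.TIMESTAMP class rather than java.sql.Timestamp. One commonly cited workaround (an assumption to verify against your driver version) is the driver's J2EE13Compliant property; DIH's JdbcDataSource passes extra dataSource attributes through to the JDBC driver, so it can be set in data-config:

```xml
<!-- Sketch: oracle.jdbc.J2EE13Compliant=true asks the driver to return
     standard java.sql.Timestamp for TIMESTAMP columns. Connection
     details here are placeholders. -->
<dataSource driver="oracle.jdbc.OracleDriver"
            url="jdbc:oracle:thin:@//dbhost:1521/ORCL"
            user="solr" password="..."
            oracle.jdbc.J2EE13Compliant="true"/>
```

The same property can alternatively be set as a JVM system property (-Doracle.jdbc.J2EE13Compliant=true).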
It is a date field.
./zahoor
On 19-Apr-2013, at 5:02 PM, Erick Erickson wrote:
I'm guessing that your timestamp is a tdate, which stores "extra"
information in the index for fast range searches. What happens if you
try to facet on just a "date" field?
Best
Erick
On Thu, Apr 18, 2013 at 8:37 AM, J Mohamed Zahoor wrote:
Hi
I am using Solr 4.1 with 6 shards.
I want to find out some "price" stats for all the days in my index.
I ended up using the stats component, like
"stats=true&stats.field=price&stats.facet=timestamp",
but it throws an error like:
Invalid Date String:' #1;#0;
Solr requires precise date formats, see:
http://lucene.apache.org/solr/api-4_0_0-BETA/org/apache/solr/schema/DateField.html
Best
Erick
On Sun, Apr 14, 2013 at 11:43 AM, ursswak...@gmail.com
wrote:
Hi,
To index a Date in Solr, the Date should be in ISO format.
Can we index a MySQL Timestamp or DateTime field without modifying the SQL
SELECT statement?
I have used
CreatedDate is of Type Date Time in MySQL
I am getting following exception
11:23:39,117 WARN
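One way to do this without touching the SELECT is DIH's DateFormatTransformer, which parses the DATETIME string on the Solr side. A sketch (entity and column names are illustrative):

```xml
<entity name="doc" transformer="DateFormatTransformer"
        query="SELECT Id, CreatedDate FROM docs">
  <!-- Parse MySQL's 'YYYY-MM-DD HH:MM:SS' into a Solr date. -->
  <field column="CreatedDate" dateTimeFormat="yyyy-MM-dd HH:mm:ss"/>
</entity>
```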
Hoss Man suggested a wonderful solution for this need:
Always set update="add" on the field you want to keep (if it exists), and use
FirstFieldValueUpdateProcessorFactory in the update chain, after
DistributedUpdateProcessorFactory (so the atomic update will add the
existing field's value first, if it exists).
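One way that chain might look in solrconfig.xml (a sketch, not a verified config: the chain name and field name are placeholders, and the Timestamp processor supplying NOW for new documents is an addition beyond what the mail states):

```xml
<updateRequestProcessorChain name="insert-time">
  <!-- Give every incoming doc a candidate timestamp if it has none. -->
  <processor class="solr.TimestampUpdateProcessorFactory">
    <str name="fieldName">timestamp</str>
  </processor>
  <!-- Listed explicitly so the next processor runs AFTER the atomic
       update has merged in the stored document's existing value. -->
  <processor class="solr.DistributedUpdateProcessorFactory"/>
  <!-- Keep only the first value: the pre-existing one, when present. -->
  <processor class="solr.FirstFieldValueUpdateProcessorFactory">
    <str name="fieldName">timestamp</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```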
Nobody responded to my JIRA issue :(
Should I commit this patch into SVN's trunk, and set the issue as Resolved?
On Sun, Feb 17, 2013 at 9:26 PM, Isaac Hebsh wrote:
Hi all,
I have a JSON file in which there is a field named last_login, whose value is
a timestamp.
I want to store that value as a timestamp; I do not want to change the field type.
Now the question is how to store the timestamp so that when I need output in
datetime format, it gives me the datetime format, and
Thank you Alex.
Atomic Update allows you to "add" new values into multivalued field, for
example... It means that the original document is being read (using
RealTimeGet, which depends on updateLog).
There is no reason that the list of operations (add/set/inc) could not
include a "create-only" operation.
Unless it is an Atomic Update, right. In which case Solr/Lucene will
actually look at the existing document and - I assume - will preserve
whatever field got already populated as long as it is stored. Should work
for default values as well, right? They get populated on first creation,
then that doc
... But, guys, I
think that it's a basic feature, and it would be better if Solr supported
it without "external help"...
I think what Walter means is make the thing that sends it to Solr set
the timestamp when it does so.
Upayavira
Hi,
I do have an externally-created timestamp, but some minutes may pass before
it will be sent to Solr.
Do you really want the time that Solr first saw it or do you want the time that
the document was really created in the system? I think an external create
timestamp would be a lot more useful.
wunder
On Feb 16, 2013, at 12:37 PM, Isaac Hebsh wrote:
> I opened a JIRA for this improvement
> Hi.
>
> I have a 'timestamp' field, which is a date, with a default value of 'NOW'.
> I want it to represent the datetime when the item was inserted (at the
> first time).
>
> Unfortunately, when the item is updated, the timestamp is changed...
>
> How can I implement INSERT TIME automatically?
>
I suspected it was to avoid caching, but I wondered what harm there would be in
caching at the HTTP level if it's just suggestions; I would say it
would be even better.
So I can remove it...
thanks
It's just a function of the jquery suggest component being used, if I recall
correctly, to ensure that HTTP caching doesn't get involved since the request
changes by the timestamp for every request. I imagine it can be "safely" (at
the risk of getting cached results
Hi,
I am studying Solritas with its browse UI that comes with the 3.5.0 example. I
have noticed that the calls to /terms to get autocompletion terms have a
'timestamp' parameter.
What is it for? I did not find any such param in the Solr docs.
Can it safely be removed?
thanks
Thanks Erik,
but could you give me an example of writing a delta query without using a timestamp
column, and explain it briefly?
Thanks in advance.
Yes. But the query will best work if you have some "delta" criteria. Otherwise
you might as well do a full import.
Erik
On Oct 11, 2011, at 5:36, vighnesh wrote:
hello everyone
Is it possible to write delta queries without using a timestamp column in the
database table?
Thanks in advance.
Regards
Vighnesh
the ad_thumb entity...
Now my guess is that the problem is that it is recorded nowhere WHEN the thumbid
was changed; the new thumb is only updated when the value
of the column [ad].[thumbid] has changed, but without a timestamp.
I tried this:
additional
SOLR : 1.4.1
There are 1,300,000+ documents in the index. Sorting on a date field with
timestamp leads to an OutOfMemoryError. So, we are looking for a way to copy
the timestamp as a long value into another field and sort on that field. Can
anyone help me with how to convert the timestamp to a long
the date
to consider, and that does not always provide a meaningful date.
Therefore I would like the function to only boost documents where the
date (not time) found in the last-Modified-Date is different from the
timestamp, eliminating results that just return the current date as
the last-Modified
It turns out you don't need to use DateFormatTransformer at all. The reason
the timestamp MySQL column failed to be inserted into Solr is that in
schema.xml I mistakenly set "indexed=false, stored=false". Of course that
won't let it get into the index at all. No wonder the schema
Is it ok to just get the date part of the information out of a
datetime field?
Any thoughts on this?
: query="select ID, title_full as TITLE_NAME, YEAR,
: COUNTRY_OF_ORIGIN, modified as RELEASE_DATE from title limit 10">
Are you certain that the first 10 results returned (you have "limit 10")
all have a value in the "modified" field?
If modified is nullable, you could ver
solr document, there is no term populated for release_date
field. All other fields are populated with terms.
The field, "release_date" is a solr date type field.
Appreciate your help.
--
View this message in context:
http://lucene.472066.n3.nabble.com/indexing-mysql-dateTime-time
Cool.. Thanks Koji...
--Sid
On Sat, Dec 4, 2010 at 3:16 AM, Koji Sekiguchi wrote:
(10/11/24 6:05), Siddharth Powar wrote:
Hey,
Is it possible to read the timestamp that the DataImportHandler uses for a
delta-import from a location other than "conf/dataimport.properties".
Thanks,
Sid
No. There is an open issue for this problem:
https://issues.apache.org/jira/b
Hey,
Is it possible to read the timestamp that the DataImportHandler uses for a
delta-import from a location other than "conf/dataimport.properties".
Thanks,
Sid
from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036'
EARTH has a Right To Life,
otherwise we all die.
- Original Message
From: Toke Eskildsen
To: "solr-user@lucene.apache.org"
Sent: Mon, November 1, 2010 11:45:34 PM
Subject: RE: Ensuring stable timestamp ordering
Dennis Gearon [gear...@sbcglobal.net] wrote:
> how about a timestamp with either a GUID appended on the end of it?
Since long (8 bytes) is the largest atomic type supported by Java, this would
have to be represented as a String (or rather BytesRef) and would take up 4 +
32 bytes + 2 * 4 bytes f
--- On Sun, 10/31/10, Toke Eskildsen wrote:
> From: Toke Eskildsen
> Subject: RE: Ensuring stable timestamp ordering
> To: "solr-user@lucene.apache.org"
Dennis Gearon [gear...@sbcglobal.net] wrote:
> Even microseconds may not be enough on some really good, fast machine.
True, especially since the timer might not provide microsecond granularity
even though the returned value is in microseconds. However, a unique timestamp
generator should k
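Such a generator can be sketched in a few lines: remember the last value handed out and never return anything less than or equal to it, even when the clock stalls (the class and names here are illustrative, not from the original mail):

```python
import time

class UniqueTimestamps:
    """Hand out strictly increasing microsecond timestamps, even when the
    clock's real granularity is coarser than a microsecond."""
    def __init__(self, clock=time.time):
        self._clock = clock
        self._last = 0

    def next(self):
        now = int(self._clock() * 1_000_000)  # microseconds
        # If the clock hasn't advanced, bump by one microsecond instead.
        self._last = max(now, self._last + 1)
        return self._last

gen = UniqueTimestamps(clock=lambda: 1.0)  # frozen clock: worst case
ts = [gen.next() for _ in range(3)]
# Strictly increasing even though the clock never advanced.
```

In a multi-writer setup this only works if a single generator instance (or some coordination) issues all timestamps.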
--- On Sun, 10/31/10, Toke Eskildsen wrote:
> From: Toke Eskildsen
> Subject: RE: Ensuring stable timestamp ordering
> To: "solr-user@lucene.apache.org"
... simply having NOW be more fine-grained, and this does seem like
something that would be nice to have in a fairly low level, but as I
said, if it would introduce backward-compatibility problems, it's easy
enough to create a timestamp field in the indexing feed.
Thank you for clarifying this.
-Mi
Lance Norskog [goks...@gmail.com] wrote:
> It would be handy to have an auto-incrementing date field, so that
> each document would get a unique number and the timestamp would then
> be the unique ID of the document.
If someone wants to implement this, I'll just note that the grani
Hi-
NOW does not get re-run for each document. If you give a large upload
batch, the same NOW is given to each document.
It would be handy to have an auto-incrementing date field, so that
each document would get a unique number and the timestamp would then
be the unique ID of the document.
On
our index.
Not much help, but the best I can do this evening.
Erick
On Thu, Oct 28, 2010 at 9:58 PM, Michael Sokolov wrote:
(Sorry - fumble finger sent too soon.)
My confusion stems from the fact that in my test I insert a number of
documents, and then retrieve them ordered by timestamp, and they don't come
back in the same order they were inserted (the order seems random), unless I
commit after each insert.
I'm curious what if any guarantees there are regarding the "timestamp" field
that's defined in the sample solr schema.xml. Just for completeness, the
definition is:
The "date" type in schema.xml does this. It is a Trie type, meaning it
stores very efficiently.
http://search.lucidimagination.com/search/out?u=http%3A%2F%2Fwiki.apache.org%2Fsolr%2FSolrQuerySyntax
On Sat, Oct 2, 2010 at 11:08 AM, Dennis Gearon wrote:
Is there a timestamp column in Solr, i.e. could I feed it something like:
2010-10-15T23:59:59
And it's indexable, of course :-)
Dennis Gearon
... "problem".
Well, I must be careful when using this field.
Thanks for your answer,
Frederico

-----Original Message-----
From: Jan Høydahl / Cominvent [mailto:jan@cominvent.com]
Sent: Wednesday, 11 August 2010 12:17
To: solr-user@lucene.apache.org
Subject: Re: timestamp field
Hi,
Which time zone are you located in? Do you have DST?
Solr uses UTC internally for dates, which means that "NOW" will be the time in
London right now :) Does that appear to be right 4 u?
Also see this thread: http://search-lucene.com/m/hqBed2jhu2e2/
--
Jan Høydahl, search solution architect
Hi,
I have on my schema
This field is returned as
2010-08-11T10:11:03.354Z
For an article added at 2010-08-11T11:11:03.354Z!
And the server has the time of 2010-08-11T11:11:03.354Z...
This is a Windows 2003 server running Solr 1.4.
Any guess of what could be wrong here?
Tha
er-30-minutes-agos-last-doc approach.
Query for the latest timestamp by sorting by timestamp descending with rows=1; the
row you get back has the greatest timestamp.
30 minutes later, query with fq=timestamp>that_one_we_remembered.
Would this be any slower with timestamps than with docids? I
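The two-step pattern above, written out as request parameters (the field name and the remembered date are illustrative):

```
# Step 1: remember the greatest timestamp currently in the index.
q=*:*&sort=timestamp desc&rows=1&fl=timestamp

# Step 2, 30 minutes later: only documents newer than the remembered value.
# Curly brackets make the bound exclusive, so the remembered doc itself
# is not returned again.
q=*:*&fq=timestamp:{2010-11-01T12:00:00Z TO *}
```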
... every 30 minutes, I ask the index what are the documents that were added
to it, since the last time I queried it, that match a certain criteria.
From time to time, once a week or so, I ask the index for ALL the documents
that match that criteria. (I also do this for not only one query, but
several.)
This is why I need the timestamp filter.

Again, I'm not entirely sure that querying / filtering on internal docids is
possible (perhaps someone can c
: On top of using trie dates, you might consider separating the timestamp
: portion and the type portion of the fq into separate fq parameters --
: that will allow them to be stored in the filter cache separately. So
: for instance, if you include "type:x OR type:y" in queries

> and a typical query would be:
> fl=id,type,timestamp,score&start=0&q="Coca+Cola"+pepsi+-"dr+pepper"&fq=timestamp:[2010-07-07T00:00:00Z+TO+NOW]+AND+(type:x+OR+type:y)&rows=2000

My understanding is that this is essentially what the solr 1.4 trie date
fie
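Applied to the quoted query, the separate-fq suggestion looks like this — each clause becomes its own fq so the filter cache can reuse them independently (rounding the open end down with date math is an illustrative extra tweak that makes the range filter itself cacheable, since a bare NOW changes every millisecond):

```
fq=timestamp:[2010-07-07T00:00:00Z TO NOW/HOUR]
fq=type:x OR type:y
```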
I don't specify any sort order, and I do request the score, so it is
ordered based on that.
My schema consists of these fields:
(changing now to tdate)
and a typical query would be:
fl=id,type,timestamp,score&start=0&q="Coca+Cola"+pepsi+-"dr+pepper"&a
> : after you update your index that involve your timestamp?
>
> based on the use case, I'm not sure that that will really help -- it sounds
> like the range query is always based on the exact timestamp of the most
> recent doc from the last time this particular query was run --
: updating your index between queries, so you may be reloading
: your cache every time. Are you updating or adding documents
: between queries and if so, how?
:
: If this is vaguely on target, have you tried firing up warmup queries
: after you update your index that involve your timestamp