Collection timeout error

2015-01-19 Thread Manohar Sripada
I am getting a collection timeout error while creating a collection on
SolrCloud. Below is the error:

> org.apache.solr.common.SolrException: createcollection the collection
time out:180s
 at
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:252)
 at
org.apache.solr.handler.admin.CollectionsHandler.handleResponse(CollectionsHandler.java:233)


This SolrCloud had been up and running for 2 weeks. Prior to this, I had
created/deleted/reloaded collections on it without problems. Suddenly, I
started getting this error while trying to create a collection. I am using
an external ZooKeeper ensemble.

When I connected to the ZooKeeper ensemble through the command line, this
is what I found:

> ls /overseer/collection-queue-work

[qn-30, qn-20, qn-26, qn-18, qn-28,
qn-16, qn-22, qn-14, qn-24, qn-12]

> get /overseer/collection-queue-work/qn-12

{
  "operation":"createcollection",
  "fromApi":"true",
  "name":"coll_4",
  "replicationFactor":"2",
  "collection.configName":"collConfig",
  "numShards":"16",
  "maxShardsPerNode":"4"}


There are many such requests in the queue, all related to Collection API
operations (e.g. CREATE, DELETE, RELOAD). Can anyone please tell me why
Solr/ZK went into this state only for the collection-related APIs?
Restarting Solr and ZooKeeper worked, but I can't ask for a restart in
production whenever this occurs, so I want to find the root cause.



Thanks


Solr Users Mailing lists in languages other than English?

2015-01-19 Thread Alexandre Rafalovitch
Hi,

Are there any non-English mailing lists for Solr Users?

I know there is a Thai-language one:
https://groups.google.com/forum/#!forum/solr-user-thailand .

 What about others, like Japanese or German or Russian?

Regards,
   Alex.

Sign up for my Solr resources newsletter at http://www.solr-start.com/


Re: Need Debug Direction on Performance Problem

2015-01-19 Thread Naresh Yadav
Michael, I tried your idea of implementing my own cursor in Solr 4.6.1
itself, but somehow that test case was taking a huge amount of time. Then I
tried the cursor approach after upgrading Solr to 4.10.3 and got better
results: for Setup 2 the time went down from 114 minutes to 18 minutes, but
that is still far from Setup 1's 2 minutes. The first 50-thousand-document
batch by itself takes about a minute, so maybe I need to look at other
things, as pagination seems to be working better now.

Thanks for the valuable suggestions.
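
For reference, the manual-cursor idea Michael describes below can be
sketched like this (a toy simulation over an in-memory sorted id list, not
a live Solr instance; the batch size and the `fetch_batch` helper are
illustrative names, not Solr API):

```python
# Manual cursor: sort by a unique key, remember the last id of each batch,
# and constrain the next batch to id > last_id -- never increase start=.
BATCH = 3

def fetch_batch(docs, last_id):
    """Return the first BATCH docs with id > last_id (docs sorted by id).

    Against Solr this would roughly be:
    q=*:*&fq=id:{LAST_ID TO *]&sort=id asc&rows=BATCH&start=0
    """
    return [d for d in docs if d > last_id][:BATCH]

docs = sorted([1, 3, 5, 7, 9, 11, 13])
last_id, out = 0, []          # assumes ids are positive
while True:
    batch = fetch_batch(docs, last_id)
    if not batch:
        break
    out.extend(batch)
    last_id = batch[-1]       # record the max id; next batch starts after it

print(out)                    # -> [1, 3, 5, 7, 9, 11, 13]
```

Each request does the same amount of work regardless of how deep into the
result set it is, which is why this scales where start=-based paging does not.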

On Mon, Jan 19, 2015 at 11:20 AM, Naresh Yadav  wrote:

> Toke, won't be able to use TermsComponent as i had complex filter criteria
> on other fields.
>
> Michael, i understood your idea of paging without using start=,
> will prototype it as it is possible in my usecase also and post here
> results i got with this approach.
>
>
> On Sun, Jan 18, 2015 at 10:05 PM, Michael Sokolov <
> msoko...@safaribooksonline.com> wrote:
>
>> You can also implement your own cursor easily enough if you have a unique
>> sortkey (not relevance score). Say you can sort by id, then you select
>> batch 1 (50k docs, say) and record the last (maximum) id in the batch.  For
>> the next batch, limit it to id > last_id and get the first 50k docs (don't
>> use start= for paging).  This scales much better when scanning a large
>> result set; you'll get constant time across the whole set instead of having
>> it increase as you page deeper.
>>
>> -Mike
>>
>>
>> On 1/18/2015 7:45 AM, Naresh Yadav wrote:
>>
>>> Hi Toke,
>>>
>>> Thanks for sharing solr internal's for my problem. I will definitely try
>>> Cursor also but only problem is my current
>>> solr version is 4.6.1 in which i guess cursor support is not there. Any
>>> other option i have for this problem ??
>>>
>>> Also as per your suggestion i will try to avoid regional units in post.
>>>
>>> Thanks
>>> Naresh
>>>
>>> On Sun, Jan 18, 2015 at 4:19 PM, Toke Eskildsen 
>>> wrote:
>>>
 Naresh Yadav [nyadav@gmail.com] wrote:
 > In both setups, we are reading in batches of 50k and each batch taking
 > Setup1: approx 7 seconds, and completing all batches of the total 10 lakh
 > results takes 1 to 2 minutes.
 > Setup2: approx 2-3 minutes, and completing all batches of the total 10 lakh
 > results takes 114 minutes.

 Deep paging across shards without cursors means that for each request, the
 full result set up to that point must be requested from each shard. The
 deeper your page, the longer it takes for each request. If you only
 extracted 500K results instead of the 1M in setup 2, it would likely take a
 lot less than 114/2 minutes.

 Since you are exporting the full result set, you should be using a cursor:
 https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results
 This should make your extraction linear to the number of documents and
 hopefully a lot faster than your current setup.

 Also, please refrain from using regional units such as "lakh" in an
 international forum. It requires some readers (me for example) to perform a
 search in order to be sure of what you are talking about.

 - Toke Eskildsen



Re: Core deletion

2015-01-19 Thread Dominique Bejean
Hi,

Is inytapdf0 the target core?

No typos in these paths?

instanceDir:/archives/solr/example/solr/indexapdf0/
dataDir:/archives/indexpdf0/data/

indexpdf0 instead of indexapdf0 for the dataDir?

Dominique

2015-01-15 15:21 GMT+01:00 :

> I duplicated an existing core, deleted the data directory and
> core.properties, updated solrconfig.xml and schema.xml and loaded the new
> core in SOLR's Admin Panel.
>
> The logs contain a few 'index locked' errors:
>
>
> solr.log:INFO  - 2015-01-15 14:43:09.492;
> org.apache.solr.core.CorePropertiesLocator; Found core inytapdf0 in
> /archives/solr/example/solr/inytapdf0/
> solr.log:INFO  - 2015-01-15 14:49:17.685;
> org.apache.solr.core.CorePropertiesLocator; Found core inytapdf0 in
> /archives/solr/example/solr/inytapdf0/
> solr.log.1:INFO  - 2015-01-05 18:08:13.253;
> org.apache.solr.core.CorePropertiesLocator; Found core inytapdf0 in
> /archives/solr/example/solr/inytapdf0/
> solr.log.1:ERROR - 2015-01-05 18:08:17.467;
> org.apache.solr.core.CoreContainer; Error creating core [inytapdf0]: Index
> locked for write for core inytapdf0
> solr.log.1:org.apache.solr.common.SolrException: Index locked for write
> for core inytapdf0
> solr.log.1:Caused by: org.apache.lucene.store.LockObtainFailedException:
> Index locked for write for core inytapdf0
> solr.log.1:INFO  - 2015-01-06 09:19:32.125;
> org.apache.solr.core.CorePropertiesLocator; Found core inytapdf0 in
> /archives/solr/example/solr/inytapdf0/
> solr.log.1:ERROR - 2015-01-06 09:19:35.305;
> org.apache.solr.core.CoreContainer; Error creating core [inytapdf0]: Index
> locked for write for core inytapdf0
> solr.log.1:org.apache.solr.common.SolrException: Index locked for write
> for core inytapdf0
> solr.log.1:Caused by: org.apache.lucene.store.LockObtainFailedException:
> Index locked for write for core inytapdf0
>
>
> Philippe
>
>
>
>
> - Original Message -
> From: "Dominique Bejean" 
> To: solr-user@lucene.apache.org
> Sent: Thursday, 15 January 2015 11:46:43
> Subject: Re: Core deletion
>
> Hi,
>
> Is there something in the Solr logs at startup that can explain the deletion?
>
> How were the cores created? Using the Cores API?
>
> Dominique
> http://www.eolya.fr
>
>
> 2015-01-14 17:43 GMT+01:00 :
>
> >
> >
> > Hello,
> >
> > I am running SOLR 4.10.0 on Tomcat 8.
> >
> > The solr.xml file in
> > .../apache-tomcat-8.0.15_solr_8983/conf/Catalina/localhost looks like
> this:
> >
> >
> > <Context crossContext="true">
> >   <Environment name="solr/home" type="java.lang.String"
> >     value="/archives/solr/example/solr" override="true"/>
> > </Context>
> >
> > My SOLR instance contains four cores, including one whose instanceDir and
> > dataDir have the following values:
> >
> >
> > instanceDir:/archives/solr/example/solr/indexapdf0/
> > dataDir:/archives/indexpdf0/data/
> >
> > Strangely enough, every time I restart Tomcat, this core's data, [and
> only
> > this core's data,] get deleted, which is pretty annoying.
> >
> > How can I prevent it?
> >
> > Many thanks.
> >
> > Philippe


Distinct Results from Solr Query

2015-01-19 Thread vit
I am using Solr 4.2.
In the results set we are getting documents with the same field value.
Is it possible to indicate in the query that we need results with a
distinct value of this field?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Distinct-Results-from-Solr-Query-tp4180471.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Distinct Results from Solr Query

2015-01-19 Thread vit
In other words, I need to pick only one document per field value.
Say I have a field cat_id. For each value of this field returned, I need to
return only 1 document, and I do not care which one.





Re: Distinct Results from Solr Query

2015-01-19 Thread Jorge Luis Betancourt González
This sounds like grouping results by field. You can enable grouping by
adding &group=true&group.field=YOURFIELD to test this feature.

For each unique value of the field specified in group.field, Solr returns a 
docList with the *top scoring document*. In the docList you can see the total 
number of matches in that group.
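
The request described above might be built like this (an illustrative
sketch only: `cat_id` is the field from the earlier message, while the core
name and path are assumptions, not from the thread):

```python
# Grouping request parameters: one group per unique cat_id value,
# each group holding its top-scoring document.
params = {
    "q": "*:*",
    "group": "true",          # turn result grouping on
    "group.field": "cat_id",  # group on this field
    "group.limit": "1",       # docs kept per group (1 is also the default)
    "wt": "json",
}
query_string = "&".join("%s=%s" % (k, v) for k, v in params.items())
print("/solr/mycore/select?" + query_string)
```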

Hope it helps,





Re: Distinct Results from Solr Query

2015-01-19 Thread vit
Unfortunately grouping will not work here since my field is multi-valued. 
So I need another solution.





How to return custom collector info

2015-01-19 Thread tedsolr
I am investigating possible modifications to the CollapsingQParserPlugin that
will allow me to collapse documents based on multiple fields. In a quick
test I was able to make this happen with two fields, so I assume I can
expand that to N fields. 

What I'm missing now is the extra data I need per group - the count of
collapsed docs and a summation on one numeric field. With single field
collapsing I could get this info from the standard stats component by using
tagging/excluding on the post filter and setting a stats facet field. Once
there are multiple fields, I lose the "free" stats info since faceting only
works with one field.

So I'm looking for advice on where/when to collect the extra data, and how
to transport it back to the caller. My first thought is to compute the info
in the collect() method of the DelegatingCollector, and store it with the
filter (somehow) so it can be retrieved in a later custom SearchComponent.
But I've read it is NOT a good idea to get a document within the collect()
method. What is the right way (place) to access a doc field value (not the
ordinal)?

I read a post by Joel B. where he said you could get access to a
ResponseBuilder directly from a post filter via a static SolrRequestInfo
call. Does this mean I could compute the extra data I need in the post
filter, AND write it out to the response (from the finish() method I guess)?
No need for a custom SearchComponent? I was thinking I would have to follow
the ExpandComponent model to get the data from the filter, then write it out
in the process() method.

This is my first attempt at customizing Solr so I may not be expressing
myself clearly. Thank you for any pointers you can provide.
(using Solr 4.9)



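
For what it's worth, the per-group bookkeeping described above (a count of
collapsed docs plus a sum over one numeric field, keyed on several fields
at once) boils down to something like this toy sketch (plain Python with
made-up field names, not the Solr collector API):

```python
# Collapse on N fields at once by keying stats on a tuple of their values;
# per key keep (doc count, sum of one numeric field).
docs = [
    {"vendor": "a", "sku": "x", "spend": 10.0},
    {"vendor": "a", "sku": "x", "spend": 5.0},
    {"vendor": "a", "sku": "y", "spend": 2.0},
    {"vendor": "b", "sku": "x", "spend": 7.0},
]
collapse_fields = ("vendor", "sku")   # the "N fields" to collapse on
stats = {}                            # group key -> (count, sum)
for doc in docs:
    key = tuple(doc[f] for f in collapse_fields)
    count, total = stats.get(key, (0, 0.0))
    stats[key] = (count + 1, total + doc["spend"])

print(stats[("a", "x")])              # -> (2, 15.0)
```

In a DelegatingCollector this accumulation would happen per collected doc,
with the map written out when the search finishes.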


Re: Distinct Results from Solr Query

2015-01-19 Thread Kydryavtsev Andrey
Sounds like you need to develop your own post-filter collector. In this
collector you can get the value of your cat_id field by document id, store
it in a collection, and filter out documents whose values are already
present in that collection. It is similar to
http://lucidworks.com/blog/custom-security-filtering-in-solr/ , but with
different custom logic.
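
As a rough illustration of that collect-and-filter idea (a toy sketch in
plain Python, not the Solr API, using one possible policy for the
multi-valued case: drop a document if any of its values has been seen):

```python
# Keep a document only if none of its cat_id values has appeared in an
# earlier kept document; otherwise filter it out.
docs = [
    {"id": 1, "cat_id": ["a", "b"]},
    {"id": 2, "cat_id": ["b"]},      # "b" already represented -> dropped
    {"id": 3, "cat_id": ["c"]},
]
seen, kept = set(), []
for doc in docs:
    if any(v in seen for v in doc["cat_id"]):
        continue                     # at least one value already seen
    seen.update(doc["cat_id"])
    kept.append(doc["id"])

print(kept)                          # -> [1, 3]
```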

20.01.2015, 00:25, "vit" :
> Unfortunately grouping will not work here since my field is multi-valued.
> So I need another solution.
>


Re: How to return custom collector info

2015-01-19 Thread Joel Bernstein
You may want to take a look at the AnalyticsQuery:
http://heliosearch.org/custom-analytics-engine/

This is an extension to the PostFilter API that gives you direct access to
the ResponseBuilder.

Joel Bernstein
Search Engineer at Heliosearch



Re: How to return custom collector info

2015-01-19 Thread Joel Bernstein
Here is a more useful link for understanding how the
AnalyticsQuery works:
http://heliosearch.org/solrs-new-analyticsquery-api/

Joel Bernstein
Search Engineer at Heliosearch



Re: Solr Users Mailing lists in languages other than English?

2015-01-19 Thread Tomoko Uchida
Hi,

There are no Solr mailing lists in Japan as far as I know; a Google group
exists but seems not to be used any more.
(Other than mailing lists, there are user meetups in Tokyo.)

Thanks,
Tomoko




Re: Solr Users Mailing lists in languages other than English?

2015-01-19 Thread Shawn Heisey
> There's no mailing lists in Japan as far as I know.
>
>> Are there any non-English mailing lists for Solr Users?

An open source project usually needs to settle on one common language for
discussion. For most projects with an international scope (like Solr),
that will be English. Not always, but that's what you'll commonly see.

Americans (I'm in that group) tend to be lazy, so that's one possible
reason. I think it's probably actually because English is the language
that's most common among all the participants, and us lazy Americans
usually ONLY speak English. ;)

Thanks,
Shawn




Can't get the DIH to recurse to index messages in Outlook PST file

2015-01-19 Thread Anton Shokhrin
Hi List,

My Solr instance is set up to index PST files with the DIH,
TikaEntityProcessor and OutlookPSTParser. After running an import, I can
see that the index contains the top-level information of the PST file (e.g.
the unique id of each message, headers, PST file size) but the messages
themselves are missing. I suspect that I need to instruct Solr to recurse
to the next level during indexing inside the DIH config file, but I don't
know how. My DIH config file looks like this:




Newly observed Facets

2015-01-19 Thread harish singh
Hi,

I have asked a question on Stackoverflow:
http://stackoverflow.com/questions/28036051/solr-newly-observed-facets

I searched the mailing list archives and found that not many people reply
there, so I am asking the same question here:

I have two fields in my Solr index: "userName" and "startTimeISO", along
with many other fields. I want to query for all the userNames that were
seen TODAY but not seen in the last 30 days. Basically, I am trying to find
the newly observed userNames for today.

Now the Solr Facet query I am running is:

facet.pivot: "userName,startTimeISO",
fq: " NOT startTimeISO:["2014-12-20T00:00:00.000Z" TO
"2015-01-18T00:00:00.000Z"] AND
startTimeISO:["2015-01-19T00:00:00.000Z" TO
"2015-01-20T00:00:00.000Z"]"

But for some reason I am getting incorrect results. For example, I see
userName "bla" in the results of the above query. If I run the same query
for tomorrow, I again see "bla" in my facet results.

I somehow cannot get the logic right. Perhaps there are tools provided by
Solr that I am unaware of?

Can someone help me here? I don't mind testing all of your suggestions and
coming back and forth with different ideas.


Thanks,

Harish


Re: Newly observed Facets

2015-01-19 Thread Alvaro Cabrerizo
At first glance, everything seems OK.

Anyway, is startTimeISO a single-valued or multi-valued field? If it is
single-valued, the clause startTimeISO:["2015-01-19T00:00:00.000Z" TO
"2015-01-20T00:00:00.000Z"] is sufficient to exclude other periods of time.
I also guess that the startTimeISO field type is date.
Another option is to rewrite your query (just for testing, as I can't find
any problem in the query you presented):

q=*:*
fq=startTimeISO:[NOW/DAY-1DAYS TO NOW]
raw parameters: facet.pivot=userName,startTimeISO

It would also be helpful to see the raw response, just to rule out that the
name "bla" appears in different documents. For example, using:

q=userName:bla
fq=startTimeISO:[NOW/DAY-1DAYS TO NOW]
fl=id,userName,startTimeISO


Hope it helps.







Issue : Replacing ID with another will degrade performance in Solr?

2015-01-19 Thread Nitin Solanki
Hi,
I am working on Solr 4.10.2. I have run into a *performance issue*: I have
indexed 600MB of data on 4 shards with a single replica each. I have
defined 2 fields (ngram and frequency). I removed the ID field and replaced
it with the ngram field. Since then, search performance has been poor,
with *QTime = 134 ms*, which is not good enough for my task.

*Schema.xml (sample part)*:
*ngram field* -


   









I have posted the same problem on Stackoverflow
but was not able to get a correct solution. Please help me.

Thanks and Regards,
 Nitin Solanki.


Re: Newly observed Facets

2015-01-19 Thread harish singh
Yes, the field type for startTimeISO is date.
My issue is that I need to find all the "newly observed" usernames for a
given day. So if today I say that username "xyz" is newly observed, it
means that "xyz" was not present in the last 30 days of data and was only
observed today. So it is "new".

Hope I explained it well. Ask me any further questions.
