Thanks Shawn! That was just a small fix from my side. Thanks for your help!
On Tue, Feb 20, 2018 at 1:43 AM, Shawn Heisey wrote:
Thanks Erick.
On Tue, Feb 20, 2018 at 1:11 AM, Erick Erickson wrote:
On 2/19/2018 8:49 AM, Aakanksha Gupta wrote:
> Thanks for the quick solution. It works. I just had to replace %20 with a space
> in query.addFilterQuery("timestamp:[151890840 TO 151891200]");
>
> Thanks a ton! :)
Right, I didn't even really look closely at what was in the fq
parameter, I just
Aakanksha:
> Be a little careful here, filter queries with timestamps can be
> tricky. The example you have is fine, but for endpoints with finer
> granularity it may be best if you don't cache them; see:
https://lucidworks.com/2012/02/23/date-math-now-and-filter-queries/
Best,
Erick
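Erick's caching point comes down to the fq string itself: a filter cache entry is only reused when the exact same fq string recurs. A small sketch of the two standard ways to handle fine-grained date filters (the `{!cache=false}` local param and `NOW/DAY` date-math rounding are standard Solr features; the field name is taken from the thread):

```python
# Two ways to keep fine-grained date filters from polluting the filter cache.
# A cached fq only pays off if the exact same string recurs; an fq built from
# an un-rounded NOW changes constantly, so it never gets a cache hit.

# 1) Round the endpoints so the string repeats for a whole day:
fq_rounded = "timestamp:[NOW/DAY-7DAYS TO NOW/DAY]"

# 2) Keep precise endpoints but tell Solr not to cache this clause:
fq_uncached = "{!cache=false}timestamp:[151890840 TO 151891200]"

print(fq_rounded)
print(fq_uncached)
```

Either string goes straight into `addFilterQuery(...)` like the one in the thread.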
Hi Shawn,
Thanks for the quick solution. It works. I just had to replace %20 with a
space in query.addFilterQuery("timestamp:[151890840 TO 151891200]");
Thanks a ton! :)
On Mon, Feb 19, 2018 at 11:43 PM, Shawn Heisey wrote:
On 2/19/2018 6:44 AM, Aakanksha Gupta wrote:
http://localhost:8983/solr/geoloc/select/?q=*:*&fq={!geofilt}&sfield=latlong&pt=-6.08165,145.8612430&d=100&wt=json&fq=timestamp:[151890840%20TO%20151891200]&fl=*,_dist_:geodist()
But I'm not sure how to build the SolrJ equivalent of this query.
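The fix that resolved this thread boils down to one rule: pass raw, unencoded values to the client API and let the client do the percent-encoding. A sketch of the same request in Python's urllib (not SolrJ, but the identical HTTP request, showing why a pre-encoded %20 inside a value is wrong):

```python
from urllib.parse import urlencode

# Parameters written the way you would hand them to a client API such as
# SolrJ's setQuery/addFilterQuery: raw, unencoded values. The library
# percent-encodes them itself, so a value that already contains "%20"
# would get double-encoded and break the query.
params = [
    ("q", "*:*"),
    ("fq", "{!geofilt}"),
    ("sfield", "latlong"),
    ("pt", "-6.08165,145.8612430"),
    ("d", "100"),
    ("wt", "json"),
    ("fq", "timestamp:[151890840 TO 151891200]"),  # literal space, not %20
    ("fl", "*,_dist_:geodist()"),
]
query_string = urlencode(params)  # encodes the space exactly once
url = "http://localhost:8983/solr/geoloc/select/?" + query_string
print(url)
```

In SolrJ the same rule applies: each of these name/value pairs maps onto one setter or `addFilterQuery` call with the unencoded value.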
Thanks a lot, Shawn!
Dmitry
On Wed, Aug 13, 2014 at 4:22 PM, Shawn Heisey wrote:
> On 8/13/2014 5:11 AM, Dmitry Kan wrote:
> > OK, thanks. Can you please add my user name to the Contributor group?
> >
> > username: DmitryKan
>
> You are added. Edit away!
>
> Thanks,
> Shawn
>
>
--
Dmitry Kan
On 8/13/2014 5:11 AM, Dmitry Kan wrote:
> OK, thanks. Can you please add my user name to the Contributor group?
>
> username: DmitryKan
You are added. Edit away!
Thanks,
Shawn
OK, thanks. Can you please add my user name to the Contributor group?
username: DmitryKan
On Tue, Aug 12, 2014 at 5:41 PM, Shawn Heisey wrote:
> On 8/12/2014 3:57 AM, Dmitry Kan wrote:
> > Hi,
> >
> > is http://wiki.apache.org/solr/Support page immutable?
>
> All pages on that wiki are changeable by end users.
On 8/12/2014 3:57 AM, Dmitry Kan wrote:
> Hi,
>
> is http://wiki.apache.org/solr/Support page immutable?
All pages on that wiki are changeable by end users. You just need to
create an account on the wiki and then ask on this list to have your
wiki username added to the Contributor group.
Thanks,
Shawn
> -Original Message- From: Alexandre Rafalovitch
> Sent: Friday, August 8, 2014 9:12 AM
> To: solr-user
> Subject: Re: Help Required
>
>
> We don't mediate job offers/positions on this list. We help people
> learn how to make these kinds of things themselves.
And the Solr Support list is where people register their available
consulting services:
http://wiki.apache.org/solr/Support
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Friday, August 8, 2014 9:12 AM
To: solr-user
Subject: Re: Help Required
We don't mediate job offers/positions on this list. We help people
learn how to make these kinds of things themselves. If you are a
developer, you may find that it would take only several days to get a
strong feel for Solr, especially if you start from tutorials/the right
books.
To find developers,
Hi Otis
Your suggestion worked fine.
Thanks
kamal
On Sun, Jun 9, 2013 at 7:58 AM, Kamal Palei wrote:
Though the syntax looks fine, I get all the records. As per the example
given above, I get all the documents, meaning the filtering did not work.
I am curious to know whether my indexing went fine or not. I will check
and revert.
On Sun, Jun 9, 2013 at 7:21 AM, Otis Gospodnetic wrote:
Also please note that for some documents blocked_company_ids may not be
present at all. In such cases the document should still appear in the
search results.
BR,
Kamal
On Sun, Jun 9, 2013 at 7:07 AM, Kamal Palei wrote:
Try:
...&q=*:*&fq=-blocked_company_ids:5
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Sat, Jun 8, 2013 at 9:37 PM, Kamal Palei wrote:
> Dear All
> I have a multi-valued field blocked_company_ids in index.
>
> You can think like
>
> 1. document1, blocked_company_ids: 1, 5, 7
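The semantics of Otis's `fq=-blocked_company_ids:5` can be sanity-checked locally, and they also cover Kamal's concern about documents missing the field entirely. A small sketch (the sample documents are made up, mirroring "document1, blocked_company_ids: 1, 5, 7"):

```python
# Hypothetical documents with a multi-valued blocked_company_ids field.
docs = [
    {"id": "document1", "blocked_company_ids": [1, 5, 7]},
    {"id": "document2", "blocked_company_ids": [2, 3]},
    {"id": "document3"},  # field absent entirely
]

def passes_filter(doc, blocked_id):
    # fq=-blocked_company_ids:5 keeps every doc whose multi-valued field
    # does NOT contain 5 -- including docs that lack the field, because
    # Solr treats a top-level negative query as *:* minus the clause.
    return blocked_id not in doc.get("blocked_company_ids", [])

matches = [d["id"] for d in docs if passes_filter(d, 5)]
print(matches)  # document2 and document3 survive; document1 is filtered out
```

So no extra clause is needed for documents without the field: the pure negative filter already admits them.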
Martin Iwanowski wrote:
How can I setup to run Solr as a service, so I don't need to have a SSH
connection open?
The advice that I was given on this very list was to use daemontools. I
set it up and it is really great - starts when the machine boots,
auto-restart on failures, easy to bring up
Yatir,
I actually think you may be OK with a single machine for 60M docs, though.
You should be able to quickly do a test where you use SolrJ to post to Solr and
get docs/second.
Use Solr 1.3. Use 2-3 indexing threads going against a single Solr instance.
Increase the buffer size param and in
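The docs/second test described above is easy to harness. A sketch of the shape of such a test with 3 indexing threads; `post_batch` here is a stub, and a real run would POST each batch to Solr (via SolrJ or HTTP) instead:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def post_batch(batch):
    # Stub standing in for a real HTTP post of one document batch to Solr.
    return len(batch)

# Hypothetical documents, batched so each request carries many docs.
docs = [{"id": str(i)} for i in range(30000)]
batches = [docs[i:i + 1000] for i in range(0, len(docs), 1000)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:  # 2-3 indexing threads
    indexed = sum(pool.map(post_batch, batches))
elapsed = time.perf_counter() - start
print(f"{indexed} docs in {elapsed:.2f}s -> {indexed / elapsed:.0f} docs/sec")
```

The measured rate against a real Solr instance is what tells you whether one machine can keep up with your corpus.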
On Wed, 24 Sep 2008 11:45:34 -0400
Mark Miller <[EMAIL PROTECTED]> wrote:
> Nothing to stop you from breaking up the tsv/csv files into multiple
> tsv/csv files.
Absolutely agreeing with you ... in one system where I implemented SOLR, I
have a process run through the file system and lazily pick
On Wed, 24 Sep 2008 07:46:57 -0400
Mark Miller <[EMAIL PROTECTED]> wrote:
> Yes. You will def see a speed increasing by avoiding http (especially
> doc at a time http) and using the direct csv loader.
>
> http://wiki.apache.org/solr/UpdateCSV
and the obvious reason that if, for whatever reason,
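The direct CSV load Mark points at is a single bulk POST rather than one HTTP request per document. A sketch of what that request looks like, built (but not sent) with Python's urllib; the core URL and the field names in the header row are assumptions for illustration:

```python
from urllib.request import Request

# One POST carries many documents -- this is where the speed-up over
# one-document-at-a-time HTTP indexing comes from.
csv_body = "id,name,price\n1,first doc,9.99\n2,second doc,14.50\n"

req = Request(
    "http://localhost:8983/solr/update/csv?commit=true",
    data=csv_body.encode("utf-8"),
    headers={"Content-Type": "text/csv; charset=utf-8"},
)

print(req.get_method(), req.full_url)
```

Sending `req` with `urllib.request.urlopen` against a running Solr would perform the actual load; splitting a huge file into several such bodies gives the multi-file approach from the quoted message.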
-Original Message-
From: Mark Miller [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 24, 2008 2:12 PM
To: solr-user@lucene.apache.org
Subject: Re: help required: how to design a large scale solr system
Hi,
I'm very new to search engines in general.
I've been using the Zend_Search_Lucene class to try out Lucene in
general, and though it surely works, it's not what I'm looking for
performance-wise.
I recently installed Solr on a newly installed Ubuntu (Hardy Heron)
machine.
I have about 20
From my limited experience:
I think you might have a bit of trouble getting 60 mil docs on a single
machine. Cached queries will probably still be *very* fast, but non
cached queries are going to be very slow in many cases. Is that 5
seconds for all queries? You will never meet that on first r
Howard,
I can think of two things:
1. Double-check that the external_cpc file is in D:/solr1/data
and post a commit to let Solr read it.
2. The DisMax query parser doesn't support the "job_id:4901708 _val_:cpc"
format for the query string. Just try q=cpc and look at the explain output.
Thank you,
Koji
Howard Lee wrote:
Help required
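Koji's second suggestion can be tried straight from the URL. A sketch of the request; `bf` is the standard dismax boost-function parameter, but the core path and whether `cpc` is usable as a function there depend on the actual schema, so treat those as assumptions:

```python
from urllib.parse import urlencode

# DisMax treats q as plain user keywords, so field:value and _val_ hooks
# do not belong inside q; a function boost goes in bf instead.
params = urlencode([
    ("q", "cpc"),             # plain keyword query, as Koji suggests
    ("defType", "dismax"),
    ("bf", "cpc"),            # boost function referencing the external field
    ("debugQuery", "true"),   # "see explain"
])
url = "http://localhost:8983/solr/select?" + params
print(url)
```

The `debugQuery=true` output shows the explain section, which is where a broken `_val_:cpc`-in-q setup becomes visible.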