Thanks Shawn! That was just a small fix from my side. Thanks for your help!
On Tue, Feb 20, 2018 at 1:43 AM, Shawn Heisey wrote:
> On 2/19/2018 8:49 AM, Aakanksha Gupta wrote:
> > Thanks for the quick solution. It works. I just had to replace %20 with a
> > space in query.addFilterQuery("timestamp
Thanks Erick.
On Tue, Feb 20, 2018 at 1:11 AM, Erick Erickson
wrote:
> Aakanksha:
>
> Be a little careful here, filter queries with timestamps can be
> tricky. The example you have is fine, but for end-points with finer
> granularity it may be best not to cache them; see:
> https://lucidworks
On 2/19/2018 8:49 AM, Aakanksha Gupta wrote:
> Thanks for the quick solution. It works. I just had to replace %20 with a space
> in query.addFilterQuery("timestamp:[151890840 TO 151891200]");
>
> Thanks a ton! :)
Right, I didn't even really look closely at what was in the fq
parameter, I just
Aakanksha:
Be a little careful here, filter queries with timestamps can be
tricky. The example you have is fine, but for end-points with finer
granularity it may be best not to cache them; see:
https://lucidworks.com/2012/02/23/date-math-now-and-filter-queries/
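For illustration, a minimal SolrJ sketch of a non-cached timestamp filter of the
kind that article describes (the {!cache=false} local param; the range values are
the ones from this thread):

import org.apache.solr.client.solrj.SolrQuery;

// Keep a fine-grained timestamp filter out of the filterCache by
// disabling caching for that one fq.
SolrQuery q = new SolrQuery("*:*");
q.addFilterQuery("{!cache=false}timestamp:[151890840 TO 151891200]");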
Best,
Erick
On Mon, Feb 19, 201
Hi Shawn,
Thanks for the quick solution. It works. I just had to replace %20 with a space
in query.addFilterQuery("timestamp:[151890840 TO 151891200]");
Thanks a ton! :)
On Mon, Feb 19, 2018 at 11:43 PM, Shawn Heisey
wrote:
> On 2/19/2018 6:44 AM, Aakanksha Gupta wrote:
>
>> http://localhos
On 2/19/2018 6:44 AM, Aakanksha Gupta wrote:
http://localhost:8983/solr/geoloc/select/?q=*:*&fq={!geofilt}&sfield=latlong&pt=-6.08165,145.8612430&d=100&wt=json&fq=timestamp:[151890840%20TO%20151891200]&fl=*,_dist_:geodist()
But I'm not sure how to build the SolrJ equivalent of this query
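For reference, one possible SolrJ translation of the URL above (a sketch only; the
field names, point, radius and timestamp range are copied straight from the URL,
and it may differ from what was actually used in this thread):

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// Sketch of the URL above expressed as a SolrJ query.
SolrClient solr =
    new HttpSolrClient.Builder("http://localhost:8983/solr/geoloc").build();
SolrQuery query = new SolrQuery("*:*");
query.addFilterQuery("{!geofilt}");                     // spatial filter
query.set("sfield", "latlong");                         // location field
query.set("pt", "-6.08165,145.8612430");                // centre point
query.set("d", "100");                                  // radius in km
query.addFilterQuery("timestamp:[151890840 TO 151891200]");
query.setFields("*", "_dist_:geodist()");               // return distance too
QueryResponse rsp = solr.query(query);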
Hi all,
I'm looking for some help with SolrJ for querying spatial data. I have the
following URL query working fine: it returns the results that are within a
100 km radius of the 'pt' provided in the URL and whose timestamp field is
between the two timestamps provided in the URL. It also
On 16 February 2016 at 06:09, Midas A wrote:
> Susheel,
>
> Is there any client available in PHP for SolrCloud which maintains the same?
>
>
No, there is none. I recommend HAProxy for non-SolrJ clients and for
load balancing SolrCloud.
HAProxy also makes it easy to do rolling updates of your SolrCl
Susheel,
Is there any client available in PHP for SolrCloud which maintains the same?
On Tue, Feb 16, 2016 at 7:31 AM, Susheel Kumar
wrote:
> In SolrJ, you would use CloudSolrClient, which interacts with ZooKeeper
> (which maintains the cluster state). See the CloudSolrClient API. So that's how
> Sol
In SolrJ, you would use CloudSolrClient, which interacts with ZooKeeper
(which maintains the cluster state). See the CloudSolrClient API. So that's how
SolrJ knows which nodes are down and which are not.
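A minimal sketch of that (assuming current SolrJ class names; the exact
CloudSolrClient constructor/builder varies across SolrJ versions, and the
ZooKeeper address and collection name here are placeholders):

import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

// CloudSolrClient watches the cluster state in ZooKeeper, so it only
// sends requests to replicas that are currently live.
CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("zkhost1:2181"), Optional.empty())
    .build();
client.setDefaultCollection("mycollection");
client.query(new SolrQuery("*:*"));
client.close();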
Thanks,
Susheel
On Mon, Feb 15, 2016 at 12:07 AM, Midas A wrote:
> Erick,
>
> We are using php for our app
Erick,
We are using PHP for our application, so which client would you suggest?
Currently we are using the PECL Solr client.
But I want to understand: suppose we send a request to a node and that
node is down at that time; how does SolrJ figure out where the request should go?
On Fri, Feb 12, 2016 at 9:44 PM,
bq: in the case of SolrCloud architecture, do we not need to have a load balancer?
First, my comment about a load balancer was for the master/slave
architecture where the load balancer points to the slaves.
Second, for SolrCloud you don't necessarily need a load balancer as
if you're using a SolrJ client re
Erick,
bq: We want the hits on solr servers to be distributed
True, this happens automatically in SolrCloud, but a simple load
balancer in front of master/slave does the same thing.
Midas: in the case of SolrCloud architecture, do we not need to have a load
balancer?
On Thu, Feb 11, 2016 at 11:42 PM
bq: We want the hits on solr servers to be distributed
True, this happens automatically in SolrCloud, but a simple load
balancer in front of master/slave does the same thing.
bq: what if the master node fails, what should be our failover strategy?
This is, indeed, one of the advantages of SolrCloud
@Jack
Currently we have around 55,00,000 (5.5 million) docs.
It's not about load on one node; we have load on different nodes at different
times, as our traffic is huge: around 60k users at a given point in time.
We want the hits on Solr servers to be distributed, so we are planning to
move to SolrCloud as it would
hi,
what if the master node fails? What should be our failover strategy?
On Wed, Feb 10, 2016 at 9:12 PM, Jack Krupansky
wrote:
> What exactly is your motivation? I mean, the primary benefit of SolrCloud
> is better support for sharding, and you have only a single shard. If you
> have no need for s
What exactly is your motivation? I mean, the primary benefit of SolrCloud
is better support for sharding, and you have only a single shard. If you
have no need for sharding and your master-slave replicated Solr has been
working fine, then stick with it. If only one machine is having a load
problem,
What is the size of your index, hardware specs, average query load, and rate of
indexing?
On Wed, 10 Feb 2016, 14:14 kshitij tyagi
wrote:
> Hi,
>
> We are currently using Solr 5.2 and I need to move to a SolrCloud
> architecture.
>
> As of now we are using 5 machines :
>
> 1. I am using 1 master wher
Hi,
We are currently using Solr 5.2 and I need to move to a SolrCloud
architecture.
As of now we are using 5 machines:
1. I am using 1 master where we are indexing our data.
2. I replicate my data onto the other machines.
One or another machine keeps showing high load, so I am planning to
move to S
Thanks a lot, Shawn!
Dmitry
On Wed, Aug 13, 2014 at 4:22 PM, Shawn Heisey wrote:
> On 8/13/2014 5:11 AM, Dmitry Kan wrote:
> > OK, thanks. Can you please add my user name to the Contributor group?
> >
> > username: DmitryKan
>
> You are added. Edit away!
>
> Thanks,
> Shawn
>
>
--
Dmitry K
On 8/13/2014 5:11 AM, Dmitry Kan wrote:
> OK, thanks. Can you please add my user name to the Contributor group?
>
> username: DmitryKan
You are added. Edit away!
Thanks,
Shawn
OK, thanks. Can you please add my user name to the Contributor group?
username: DmitryKan
On Tue, Aug 12, 2014 at 5:41 PM, Shawn Heisey wrote:
> On 8/12/2014 3:57 AM, Dmitry Kan wrote:
> > Hi,
> >
> > is http://wiki.apache.org/solr/Support page immutable?
>
> All pages on that wiki are change
On 8/12/2014 3:57 AM, Dmitry Kan wrote:
> Hi,
>
> is http://wiki.apache.org/solr/Support page immutable?
All pages on that wiki are changeable by end users. You just need to
create an account on the wiki and then ask on this list to have your
wiki username added to the Contributor group.
Thanks,
> -Original Message- From: Alexandre Rafalovitch
> Sent: Friday, August 8, 2014 9:12 AM
> To: solr-user
> Subject: Re: Help Required
>
>
> We don't mediate job offers/positions on this list. We help people to
> learn how to make these kinds of things themselves.
And the Solr Support list is where people register their available
consulting services:
http://wiki.apache.org/solr/Support
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Friday, August 8, 2014 9:12 AM
To: solr-user
Subject: Re: Help Required
We don't me
We don't mediate job offers/positions on this list. We help people to
learn how to make these kinds of things themselves. If you are a
developer, you may find that it takes only several days to get a
strong feel for Solr, especially if you start from tutorials/the right
books.
To find developers,
Dear Sirs,
I wonder if you can help me?
I'm looking for a developer who uses Solr to build for me a faceted search
facility using location. In a nutshell, I need this functionality as seen here:
www.citypantry.com
wwwdinein.
Here the vendor, via Google Maps, enters the area/radius they cover, which en
Hi Otis
Your suggestion worked fine.
Thanks
kamal
On Sun, Jun 9, 2013 at 7:58 AM, Kamal Palei wrote:
> Though the syntax looks fine, I get all the records. As per the example
> given above, I get all the documents, meaning the filtering did not work. I am
> curious to know if my indexing went fine
Though the syntax looks fine, I get all the records. As per the example
given above, I get all the documents, meaning the filtering did not work. I am
curious to know whether my indexing went fine or not. I will check and report
back.
On Sun, Jun 9, 2013 at 7:21 AM, Otis Gospodnetic wrote:
> Try:
>
> ...
Also, please note that for some documents blocked_company_ids may not be
present at all. In such cases the document should still be present in the search
results.
BR,
Kamal
On Sun, Jun 9, 2013 at 7:07 AM, Kamal Palei wrote:
> Dear All
> I have a multi-valued field blocked_company_ids in index.
Try:
...&q=*:*&fq=-blocked_company_ids:5
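In SolrJ that might look like the sketch below; note that a purely negative fq
like this also matches documents that have no blocked_company_ids value at all,
which covers the case Kamal mentions above:

import org.apache.solr.client.solrj.SolrQuery;

// Exclude documents whose multi-valued blocked_company_ids contains 5;
// documents without the field still match.
SolrQuery q = new SolrQuery("*:*");
q.addFilterQuery("-blocked_company_ids:5");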
Otis
--
Solr & ElasticSearch Support
http://sematext.com/
On Sat, Jun 8, 2013 at 9:37 PM, Kamal Palei wrote:
> Dear All
> I have a multi-valued field blocked_company_ids in index.
>
> You can think like
>
> 1. document1 , blocked_company_ids: 1, 5, 7
Dear All,
I have a multi-valued field blocked_company_ids in the index.
You can think of it like this:
1. document1, blocked_company_ids: 1, 5, 7
2. document2, blocked_company_ids: 2, 6, 7
3. document3, blocked_company_ids: 4, 5, 6
and so on.
If I want to retrieve all the documents where blocked_compan
Thanks Erick,
On Fri, May 28, 2010 at 2:17 PM, Erik Hatcher wrote:
> You've tagged facet queries, but it looks like you might want to use the
> "excl"ude capability on your filter queries also. Filter queries are
> additive, constraining the results further for each one, and by default
> faceting
You've tagged facet queries, but it looks like you might want to use the
"excl"ude capability on your filter queries also. Filter queries are
additive, constraining the results further for each one, and by
default faceting is based off the search results. Use excl to have
facets count outsid
Hi All,
I have a use case where I have to tag facet queries.
Here is the code snippet for what I tried:
query.addFilterQuery("{!tag=NE}med:Blog AND slev:neutral");
query.addFacetQuery("{!tag=NE key=BLOG}med:Blog AND slev:neutral");
query.addFilterQuery("{!tag=P}med:Review AND slev:neutral");
query
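For reference, a sketch of the tag/exclude pattern Erik describes, pairing
{!tag=...} on the filter with {!ex=...} on the facet query (the key names other
than BLOG are assumed for illustration):

import org.apache.solr.client.solrj.SolrQuery;

// Tag each filter query, then exclude that tag on the matching facet
// query so the facet is counted as if that filter were not applied.
SolrQuery query = new SolrQuery("*:*");
query.setFacet(true);
query.addFilterQuery("{!tag=NE}med:Blog AND slev:neutral");
query.addFacetQuery("{!ex=NE key=BLOG}med:Blog AND slev:neutral");
query.addFilterQuery("{!tag=P}med:Review AND slev:neutral");
query.addFacetQuery("{!ex=P key=REVIEW}med:Review AND slev:neutral");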
: Now I am trying *index-time* boosting to improve response time. So I created
: an algorithm where I do the following:
: 1. sort the records I get from the database on approval_dt asc and increase the
: boost value of the element for approval_dt by 0.1 as I encounter
: higher approval_dt records. If
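For context, a rough sketch of how a per-document index-time boost was set in the
SolrJ of that era (SolrInputDocument.setDocumentBoost, which was removed in
Solr 7; the Record type, its getters, and the 'server' variable are hypothetical):

import org.apache.solr.common.SolrInputDocument;

// Give records with a later approval_dt a progressively higher
// document boost, +0.1 per record, as described above.
float boost = 1.0f;
for (Record r : recordsSortedByApprovalDtAsc) {     // hypothetical input list
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", r.getId());
    doc.addField("approval_dt", r.getApprovalDate());
    doc.setDocumentBoost(boost);
    boost += 0.1f;
    server.add(doc);                                 // SolrServer of that era
}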
Hi,
Sending this mail again after I joined the solr-user group. Kindly find time
to help.
Thanks and Rgds,
Anil
-- Forwarded message --
From: Anil Cherian
Date: Fri, Nov 13, 2009 at 3:48 PM
Subject: solr index-time boost... help required please
To: solr-user@lucene.apache.org, solr
Martin Iwanowski wrote:
How can I set up Solr to run as a service, so I don't need to have an SSH
connection open?
The advice that I was given on this very list was to use daemontools. I
set it up and it is really great: it starts when the machine boots,
auto-restarts on failures, and is easy to bring u
- Original Message
> From: "Ben Shlomo, Yatir" <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Wednesday, September 24, 2008 2:50:54 AM
> Subject: help required: how to design a large scale solr system
>
> Hi!
>
> I am already
On Wed, 24 Sep 2008 11:45:34 -0400
Mark Miller <[EMAIL PROTECTED]> wrote:
> Nothing to stop you from breaking up the tsv/csv files into multiple
> tsv/csv files.
I absolutely agree with you ... in one system where I implemented Solr, I
have a process run through the file system and lazily pick
Norberto Meijome wrote:
On Wed, 24 Sep 2008 07:46:57 -0400
Mark Miller <[EMAIL PROTECTED]> wrote:
Yes. You will definitely see a speed increase by avoiding HTTP (especially
doc-at-a-time HTTP) and using the direct CSV loader.
http://wiki.apache.org/solr/UpdateCSV
and the obvious reason t
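A hedged SolrJ sketch of driving the CSV loader programmatically (class names from
current SolrJ; the 2008-era HTTP client was called CommonsHttpSolrServer, and the
core URL and file name are placeholders):

import java.io.File;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

// Bulk-load a CSV file through the /update/csv handler instead of
// posting one document at a time over HTTP.
SolrClient solr =
    new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();
ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/csv");
req.addFile(new File("docs.csv"), "text/csv; charset=utf-8");
req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
req.process(solr);
solr.close();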
On Wed, 24 Sep 2008 07:46:57 -0400
Mark Miller <[EMAIL PROTECTED]> wrote:
> Yes. You will definitely see a speed increase by avoiding HTTP (especially
> doc-at-a-time HTTP) and using the direct CSV loader.
>
> http://wiki.apache.org/solr/UpdateCSV
and the obvious reason that if, for whatever reason,
files as opposed to directly indexing each document via
http post?
-Original Message-
From: Mark Miller [mailto:[EMAIL PROTECTED]
Sent: Wednesday, September 24, 2008 2:12 PM
To: solr-user@lucene.apache.org
Subject: Re: help required: how to design a large scale solr system
From my
Hi,
I'm very new to search engines in general.
I've been using the Zend_Search_Lucene class before to try Lucene in
general, and though it surely works, it's not what I'm looking for
performance-wise.
I recently installed Solr on a newly installed Ubuntu (Hardy Heron)
machine.
I have about 20
@lucene.apache.org
Subject: Re: help required: how to design a large scale solr system
From my limited experience:
I think you might have a bit of trouble getting 60 mil docs on a single
machine. Cached queries will probably still be *very* fast, but non-cached
queries are going to be very slow in
From my limited experience:
I think you might have a bit of trouble getting 60 mil docs on a single
machine. Cached queries will probably still be *very* fast, but non-cached
queries are going to be very slow in many cases. Is that 5
seconds for all queries? You will never meet that on first r
Hi!
I am already using Solr 1.2 and am happy with it.
In a new project with a very tight deadline (10 development days from
today) I need to set up a more ambitious system in terms of scale.
Here is the spec:
* I need to index about 60,000,000 documents
* E
Howard,
I can think of two things:
1. Double-check that the external_cpc file is there in D:/solr1/data,
and post to let Solr read it.
2. The DisMax query doesn't support the "job_id:4901708 _val_:cpc" format
for the query string. Just try q=cpc and see the explain output.
Thank you,
Koji
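A tiny SolrJ sketch of that check (illustrative only: it just runs q=cpc with
debugQuery=true so the response includes the score explanation):

import org.apache.solr.client.solrj.SolrQuery;

// Run q=cpc with debugging on; the "explain" section of the response
// shows how each document's score was computed.
SolrQuery q = new SolrQuery("cpc");
q.set("debugQuery", "true");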
Howard Lee wrote:
Help required with external value source SOLR-351
I'm trying to get this new feature to work without much success. I've
completed the following steps.
1) downloaded the latest nightly build
2) added the following to schema.xml
and
3) Created a file in the solr index folder - "ext