the creds param must be included in the
> hashCode and equals logic.
>
> Joel Bernstein
> Search Engineer at Heliosearch
>
> On Wed, Oct 8, 2014 at 1:17 PM, Christopher Gross
> wrote:
>
> > Code:
> > http://pastebin.com/tNjzDbmy
> >
> > Solr 4.9.0
>
Code:
http://pastebin.com/tNjzDbmy
Solr 4.9.0
Tomcat 7
Java 7
I took Erik Hatcher's example for creating a PostFilter and have modified
it so it would work with Solr 4.x. Right now it works...the first time.
If I were to run this query it would work right:
http://localhost:8080/solr/plugintest/s
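The advice at the top of this thread (fold the creds parameter into hashCode and equals) matters because Solr's filter cache keys entries by the query object's equals/hashCode. A minimal, hypothetical sketch of the cache-key side of that fix (class and field names invented for illustration, not from the pastebin):

```java
// Hypothetical sketch: why a custom PostFilter must fold every
// constructor parameter (here, "creds") into equals()/hashCode().
// Solr's filter cache keys entries by the query object, so two filters
// that differ only in creds must NOT compare equal, or the second
// request silently reuses the first user's cached results -- which is
// exactly the "works the first time" symptom described below.
import java.util.Objects;

public class CredsFilterKey {
    private final String field;
    private final String creds;

    public CredsFilterKey(String field, String creds) {
        this.field = field;
        this.creds = creds;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CredsFilterKey)) return false;
        CredsFilterKey other = (CredsFilterKey) o;
        return Objects.equals(field, other.field)
            && Objects.equals(creds, other.creds); // omit this and cache collisions occur
    }

    @Override
    public int hashCode() {
        return Objects.hash(field, creds);
    }
}
```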
Thanks Hoss -- adding in the LengthFilterFactory did the trick.
-- Chris
On Mon, Sep 15, 2014 at 1:57 PM, Bryan Bende wrote:
> I ran into this problem as well when upgrading to Solr 4.8.1...
>
> We had a somewhat large binary field that was "indexed=false stored=true",
> but because of the copy
[sorry if this double posts -- I got an error on sending so I'm trying it
again..]
I'm storing the page content in a "string" in Solr -- for display later.
I'm indexing that content into a text field (text_en_splitting) for
full-text searching.
I'm getting an error on the "string" portion, but pe
that does tokenizing and stemming for plain text search anyway.
> >
> > Michael Della Bitta
> >
> > Applications Developer
> >
> > o: +1 646 532 3062
> >
> > appinions inc.
> >
> > “The Science of Influence Marketing”
> >
>
Solr 4.9.0
Java 1.7.0_49
I'm indexing an internal Wiki site. I was running on an older version of
Solr (4.1) and wasn't having any trouble indexing the content, but now I'm
getting errors:
SCHEMA:
LOGS:
Caused by: java.lang.IllegalArgumentException: Document contains at least
one immense term
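The "immense term" error comes from Lucene 4.x's hard per-term limit: a single indexed term may not exceed 32766 bytes of UTF-8. For an untokenized "string" field the whole value is one term, which is why a large page body trips it. A small pre-flight check (an alternative to a LengthFilterFactory in the analysis chain), sketched here as a standalone helper:

```java
// Hedged sketch: Lucene 4.x rejects any single indexed term whose
// UTF-8 encoding exceeds 32766 bytes, producing the "Document contains
// at least one immense term" error quoted above. For an untokenized
// string field the entire value is one term, so checking the byte
// length up front (or truncating/filtering in the analysis chain)
// avoids the indexing failure.
import java.nio.charset.StandardCharsets;

public class TermLengthCheck {
    // Lucene's per-term byte limit in 4.x
    static final int MAX_TERM_BYTES = 32766;

    static boolean fitsAsSingleTerm(String value) {
        return value.getBytes(StandardCharsets.UTF_8).length <= MAX_TERM_BYTES;
    }
}
```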
I just got Solr 4.9.0 running as a 3 node cloud. I use the CloudSolrServer
class to connect and do queries, but it isn't working now using HTTPS. I
don't see any options for the CloudSolrServer to use https (no key/trust
store or anything).
What SolrJ classes should I be looking at to connect an
ell, like typos
("mulitValued" instead of "multiValued"), which left me in a similar state.
-- Chris
On Thu, Sep 4, 2014 at 3:28 PM, Anshum Gupta wrote:
> I'm just curious, do you know why the CREATE failed for you?
>
>
> On Thu, Sep 4, 2014 at 12:21 PM, Christopher G
cleanup things if stuff goes wrong. As I don't know,
> I'm not sure about it but did you already try it and things didn't
> work/clean up for you? If that's the case, was there an error that you
> noticed?
>
>
>
> On Thu, Sep 4, 2014 at 4:45 AM, Christopher
hing that I've found
is to wipe out the version-2 for all the zookeepers, restart them, then
reload my configs back in.
Thanks!
-- Chris
On Thu, Sep 4, 2014 at 4:12 AM, Shawn Heisey wrote:
> On 9/2/2014 11:44 AM, Christopher Gross wrote:
> > OK -- so I think my previous attempts w
Chris
On Tue, Sep 2, 2014 at 2:30 PM, Christopher Gross wrote:
> Is the solr.ssl.checkPeerName option available in 4.8.1? I have my
> Tomcat starting up with that as a -D option, but I'm getting an exception
> on validating the hostname w/ the cert...
>
> -- Chris
>
Is the solr.ssl.checkPeerName option available in 4.8.1? I have my Tomcat
starting up with that as a -D option, but I'm getting an exception on
validating the hostname w/ the cert...
-- Chris
On Tue, Sep 2, 2014 at 1:44 PM, Christopher Gross wrote:
> OK -- so I think my previous
OK -- so I think my previous attempts were causing the problem.
Since this is a dev environment (and is still empty), I just went ahead and
wiped out the "version-2" directories for the zookeeper nodes, reloaded my
solr collections, then ran that command (zkcli.sh in the solr distro).
That did work
Side note -- I've also tried adding the clusterprops.json file via
zookeeper's shell client on the command line, and within that client, all
with no luck.
-- Chris
On Tue, Sep 2, 2014 at 12:19 PM, Christopher Gross
wrote:
> Hi Hoss.
>
> I did finally stumble onto that doc
Hi Hoss.
I did finally stumble onto that document (just after I posted my last
message, of course).
Using bash shell.
I've now tried those steps:
Tomcat is stopped.
First I run:
./zkcli.sh -zkhost localhost:2181 -cmd put /clusterprops.json
'{"urlScheme":"https"}'
I confirm via the zookeeper-pr
SONParser$ParseException: Expected string:
char=\,position=1 BEFORE='{\' AFTER='"urlScheme\":\"https\"}'
I'm not getting a whole lot on searches for "clusterprops.json" -- any
advice would be appreciated.
-- Chris
On Tue, Sep 2, 2014 at 8:59 A
Solr 4.8.1
Java 1.7
Tomcat 7.0.50
Zookeeper 3.4.6
Trying to get a SolrCloud running with https only. I found this:
https://issues.apache.org/jira/browse/SOLR-3854
I don't have a clusterprops.json file, and running the zkCli command
doesn't add one either.
Command is along the lines of:
./zkCli.s
a handful of times.
I'll see about getting a new version in place soon. If it still happens,
I'll definitely log something in JIRA for it.
Thanks!
-- Chris
On Thu, Aug 7, 2014 at 4:07 PM, Shawn Heisey wrote:
> On 8/7/2014 1:46 PM, Christopher Gross wrote:
> > Solr 4.1
Solr 4.1, in SolrCloud mode. 3 nodes configured, Running in Tomcat 7 w/
Java 7.
I have a few cores set up, let's just call them A, B, C and D. They have
some uniquely named xslt files, but they all have a "rss.xsl" file.
Sometimes, on just 1 of the nodes, if I do a query for something in A and
Checked that first -- it's a test site with a small sample size. The field
is set in all of the items. And refreshing the query a few times can yield
either result (with/without the error).
I'm reverting back to an old version of my stack (my code, plus tomcat &
solr), I'll step through my previ
Solr 4.7.2 (and 4.6.1)
Tomcat 7.0.52
Java 1.7.0_45 (and _55)
I'm getting some really odd behavior with some XSLT documents. I've been
doing some upgrades to Java & Solr and I'm trying to narrow down where the
problems are happening.
I have a few XSLT docs that I put into the conf/xslt directory
it's also a
> single-transform.
>
> Are you satisfying both of those conditions? If so, it's probably ok
> to just ignore the warning.
>
> Regards,
>Alex.
> Personal website: http://www.outerthoughts.com/
> Current project: http://www.solr-start.com/ - Accelerating
ciently high
> value."
> );
> }
>
>
>
>
>
> On Thursday, May 1, 2014 11:29 PM, Christopher Gross
> wrote:
> I get this warning when Solr (4.7.2) Starts:
> WARN org.apache.solr.util.xslt.TransformerProvider – The
> TransformerProvid
I get this warning when Solr (4.7.2) Starts:
WARN org.apache.solr.util.xslt.TransformerProvider – The
TransformerProvider's simplistic XSLT caching mechanism is not appropriate
for high load scenarios, unless a single XSLT transform is used and
xsltCacheLifetimeSeconds is set to a sufficiently hi
> On 4/24/2014 9:44 AM, Christopher Gross wrote:
>
>> These get added to the startup of Tomcat:
>> -DhostPort=8181 -Djetty.port=8181
>> -DzkHost=localhost:2181,localhost:2182,localhost:2183,
>> localhost:2184,localhost:2185
>> -Dbootstrap_conf=true -Dport=818
These get added to the startup of Tomcat:
-DhostPort=8181 -Djetty.port=8181
-DzkHost=localhost:2181,localhost:2182,localhost:2183,localhost:2184,localhost:2185
-Dbootstrap_conf=true -Dport=8181 -DhostContext=solr
-DzkClientTimeout=2
-- Chris
On Thu, Apr 24, 2014 at 11:41 AM, Rafał Kuć wro
Running Solr 4.6.1, Tomcat 7.0.29, Zookeeper 3.4.6, Java 6
I have 3 Tomcats running, each with their own Solr war, all on the same
box, along with 5 ZK nodes. It's a dev box.
I can get the SolrCloud up and running, then use the Collections API to get
everything going. It's all fine until I stop
I get both of these errors a few times in my tomcat (7.0.52) catalina.out
logfile:
2014-04-02 13:22:32,026 WARN org.apache.solr.schema.FieldTypePluginLoader
- TokenFilterFactory is using deprecated LUCENE_33 emulation. You should at
some point declare and reindex to at least 4.0, because 3.x emul
Running Apache Solr 4.5 on Tomcat 7.0.29, Java 1.6_30. 3 SolrCloud nodes
running. 5 ZK nodes (v 3.4.5), one on each SolrCloud server, and on 2
other servers.
I want to create a collection on all 3 nodes. I only need 1 shard. The
config is in Zookeeper (another collection is using it)
http://s
. I've found that most
> times, if the index isn't the one specified in index.properties then `lsof`
> won't show Solr as using it.
> >
> > FWIW I'm pretty sure there's a bug in Jira about old indexes not getting
> purged but I can't find it rig
I have Solr 4.1 running in the SolrCloud mode. My largest collection has 2
index directories (and an index.properties & replication.properties in that
directory). Is it safe to remove the older index not listed in
index.properties? I'm running low on disk space, otherwise I'd have just
left it a
In Solr 4.5, I'm trying to create a new collection on the fly. I have a
data dir with the index that should be in there, but the CREATE command
makes the directory be:
_shard1_replicant#
I was hoping that making a collection named something would use a directory
with that name to let me use the d
that.
Thanks Shawn -- I have a much better understanding of all this now.
-- Chris
On Thu, Oct 17, 2013 at 7:31 PM, Shawn Heisey wrote:
> On 10/17/2013 12:51 PM, Christopher Gross wrote:
>
>> OK, super confused now.
>>
>> http://index1:8080/solr/admin/cores?action
something like:
>
> {"collection":{"AdWorksQuery":"AdWorks"}}
>
> Or access the Zookeeper instance, and do a 'get /aliases.json'.
>
> -Original Message-
> From: Christopher Gross [mailto:cogr...@gmail.com]
> Sent: Thursday, Octobe
ork. How do I use an alias when it gets made?
-- Chris
On Thu, Oct 17, 2013 at 2:51 PM, Christopher Gross wrote:
> OK, super confused now.
>
>
> http://index1:8080/solr/admin/cores?action=CREATE&name=test2&collection=test2&numshards=1&replicationFactor=3
>
to the solr.xml file,
and restart tomcat.
Is there a primer that I'm missing for how to do this?
Thanks.
-- Chris
On Wed, Oct 16, 2013 at 2:59 PM, Christopher Gross wrote:
> Thanks Shawn, the explanations help bring me forward to the "SolrCloud"
> mentality.
>
> So i
To avoid the overhead, could you put Solr on a separate VLAN (with ACLs to
> client servers)?
>
> Cheers,
>
> Tim
>
>
> On 12 October 2013 17:30, Shawn Heisey wrote:
>
> > On 10/11/2013 9:38 AM, Christopher Gross wrote:
> > > On Fri, Oct 11, 2013 at 11:08
ve got going now.
Thanks again!
-- Chris
On Wed, Oct 16, 2013 at 2:40 PM, Shawn Heisey wrote:
> On 10/16/2013 11:51 AM, Christopher Gross wrote:
> > Ok, so I think I was confusing the terminology (still in a 3.X mindset I
> > guess.)
> >
> > From the Cloud->
be on each of the index1, index2 and index3
instances of Solr?
-- Chris
On Wed, Oct 16, 2013 at 12:40 PM, Shawn Heisey wrote:
> On 10/16/2013 9:44 AM, Christopher Gross wrote:
> > Garth,
> >
> > I think I get what you're saying, but I want to make sure.
> >
> When you're ready, point the 'query' alias to core1new.
>
> You're now running completely on core1new, and can use the Collection API
> to delete core1 from the cloud. Or keep it around as a backup to which you
> can restore simply by changing 'query' a
nd index3 update to
have "core1new" data?
Thanks again!
-- Chris
On Tue, Oct 15, 2013 at 7:30 PM, Shawn Heisey wrote:
> On 10/15/2013 2:17 PM, Christopher Gross wrote:
>
>> I have 3 Solr nodes (and 5 ZK nodes).
>>
>> For #1, would I have to do that on all o
On Tue, Oct 15, 2013 at 3:02 PM, Shawn Heisey wrote:
> On 10/15/2013 12:36 PM, Christopher Gross wrote:
>
>> In Solr 3.x, whenever I'd reindex content, I'd fill up one instance, copy
>> the whole "data" directory over to the second (or third) instance and
In Solr 3.x, whenever I'd reindex content, I'd fill up one instance, copy
the whole "data" directory over to the second (or third) instance and then
restart that Tomcat to get the indexes lined up.
With Solr 4.1, I'm guessing that I can't go and do that without taking down
all of my nodes and maki
On Fri, Oct 11, 2013 at 11:08 AM, Shawn Heisey wrote:
> On 10/11/2013 8:17 AM, Christopher Gross wrote:
> > Is there a spot in a Solr configuration that I can set this up to use
> HTTPS?
>
> From what I can tell, not yet.
>
> https://issues.apache.org/jira/
I have 3 SolrCloud nodes (call them idx1, idx2, idx3), and the boxes have
SSL & certs configured on them to protect the Solr Indexes.
Right now, I can do queries on idx1 and it works fine.
If I try to query on idx3, I get:
org.apache.solr.common.SolrException:
org.apache.solr.client.solrj.SolrServ
In 3.x Solr (and earlier) I was able to create a new xslt doc in the
conf/xslt directory and immediately start using it.
In my 4.1 setup, I have:
5
But after that small wait I still can't use it. Is there another setting
that I'm missing somewhere? I am using SolrCloud, do I need to h
rride that and be explicit if
> it's guessing wrong. If you have nodes on different machines, you don't
> want it to be localhost.
>
> Next, look at the logs. They should give a clue why the replicas can't
> recover from the leader.
>
> - Mark
>
> On Feb 27,
I've been trying out Solr 4 -- I was able to get it working with 3
instances of Tomcat on the same box (different ports), and 5 Zookeeper
nodes on that box as well. I've started to get my production layout going,
but I can't seem to get the Solr to replicate among the nodes.
I can see that the So
single zk host in the zk host string initially. That might make
> it easier to track down why it won't connect. It's tough to diagnose
> because the root exception is being swallowed - it's likely a connect to zk
> failed exception though.
>
> - Mark
>
> On Jan 10, 20
two field definitions. I mean, is it possible that
> you might have "omit positions" on the region field?
>
> -- Jack Krupansky
>
> -Original Message- From: Christopher Gross
> Sent: Wednesday, November 07, 2012 7:15 AM
> To: solr-user
I have a "keyword" field type that I made:
I'm running Solr 3.4. The past 2 months I've been getting a lot of
write.lock errors. I switched to the "simple" lockType (and made it
clear the lock on restart), but my index is still locking up a few
times a week.
I can't seem to determine what is causing the locks -- does anyone out
there hav
x?
>
> What are the attributes and field type for the "people" field?
>
> -- Jack Krupansky
>
> -Original Message- From: Christopher Gross
> Sent: Tuesday, June 12, 2012 11:05 AM
> To: solr-user
> Subject: Different sort for each facet
>
>
In Solr 3.4, is there a way I can sort two facets differently in the same query?
If I have:
http://mysolrsrvr/solr/select?q=*:*&facet=true&facet.field=people&facet.field=category
is there a way that I can sort people by the count and category by the
name all in one query? Or do I need to do tha
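Solr does support this in one request via per-field facet parameter overrides: any facet parameter can be scoped to a single field with the f.&lt;field&gt;.&lt;param&gt; syntax, so people can use facet.sort=count while category uses facet.sort=index. A small sketch that just assembles such a URL (host and field names taken from the question):

```java
// Hedged sketch: Solr's per-field override syntax (f.<field>.<param>)
// lets each facet.field carry its own facet.sort in a single query --
// "count" orders by facet count, "index" orders lexically by term.
// The URL is assembled here purely for illustration.
public class FacetUrl {
    static String build(String base) {
        return base + "/select?q=*:*&facet=true"
            + "&facet.field=people&f.people.facet.sort=count"
            + "&facet.field=category&f.category.facet.sort=index";
    }
}
```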
ed. Maybe look at alternatives that don't tokenize
> fields. Just a guess here though. Good luck.
>
> On Fri, 13 Jan 2012 09:04:00 -0500, Christopher Gross
> wrote:
>> My index has a multi-valued String field called "tag" that is used to
>> store a category/key
My index has a multi-valued String field called "tag" that is used to
store a category/keyword for the item the record is about. I made a
faceted query in order to find out all the different tags that are
stored in the index:
http://localhost:8080/solr/select?q=*:*&facet=on&facet.field=tag&facet.
I'm getting different results running these queries:
http://localhost:8080/solr/select?&q=*:*&fq=source:wiki&fq=tag:car&sort=score+desc,dateSubmitted+asc&fl=title,score,dateSubmitted&rows=100
http://localhost:8080/solr/select?fq=source:wiki&q=tag:car&sort=score+desc,dateSubmitted+desc&fl=title,sc
Ha, sorry Hoss. Thought i hit user@nutch, gmail did the replace and I
wasn't paying attention.
-- Chris
On Fri, Dec 16, 2011 at 2:46 PM, Chris Hostetter
wrote:
>
> : http://wiki.apache.org/nutch/Crawl
> :
> : This script no longer works. See:
>
> If you have a question about something on the
http://wiki.apache.org/nutch/Crawl
This script no longer works. See:
echo "- Index (Step 5 of $steps) -"
$NUTCH_HOME/bin/nutch index crawl/NEWindexes crawl/crawldb crawl/linkdb \
crawl/segments/*
The "index" call doesn't existso what does this line get replaced
with? Is there an
't be stored though (unless you just want to verify
> for debugging).
>
> -Yonik
> http://www.lucidimagination.com
>
>
>
> On Fri, Oct 28, 2011 at 9:35 AM, Christopher Gross wrote:
>> Hi Yonik.
>>
>> I never made a dynamicField definition for _latLon ... I was
unding box giving results outside the specified
range. Or would I be better off just indexing a lat & lon in separate
fields, then making a normal numeric ranged search against them.
-- Chris
On Thu, Oct 27, 2011 at 3:09 PM, Yonik Seeley
wrote:
> On Thu, Oct 27, 2011 at 2:34 PM, Christo
I'm using the geohash field to store points for my data. When I do a
bounding box like:
localhost:8080/solr/select?q=point:[-45,-80%20TO%20-24,-39]
I get a data point that falls outside the box: (-73.03358 -50.46815)
The Spatial Search (http://wiki.apache.org/solr/SpatialSearch) pag
Sorry, lack of sleep made me see an extra "0" in there.
I haven't had this issue -- but after every batch of items that I post
into Solr with SolrJ I run the commit() routine on my instance of the
CommonsHttpSolrServer, so they show up immediately. You could try
altering your code to do that, or
See:
http://wiki.apache.org/solr/SolrConfigXml
The example in the wiki is:
1
86000
So since you have yours set to 300000, that translates to 300,000 ms,
which is 5 minutes. If you want the autocommit feature to trigger
more often, you could decrease the number. Droppin
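The two bare numbers above are what is left of the wiki's autoCommit example after the archive stripped its XML tags; in solrconfig.xml the block looks roughly like this (values illustrative, not prescriptive):

```xml
<autoCommit>
  <maxDocs>10000</maxDocs>   <!-- commit after this many queued documents -->
  <maxTime>300000</maxTime>  <!-- or after this many milliseconds (5 minutes) -->
</autoCommit>
```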
;
> http://www.w3.org/TR/xpath-functions/#func-not
>
> So, not(contains)) rather than not contains() should presumably do
> the trick.
>
> -Original Message-
> From: Christopher Gross [mailto:cogr...@gmail.com]
> Sent: Thursday, August 18, 2011 7:44 AM
> To
I'm using Solr 3.3, trying to run an XSLT translation on the results
of a query. The xsl file worked just fine for Solr 1.4.1, but I'm
having trouble with the newer version.
The root cause is:
javax.xml.transform.TransformerException: Extra illegal tokens:
'contains', '(', '$', 'posted', ',', ''0
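The reply later in this thread pins it down: XPath has no "not" keyword operator, only the not() function, so a stylesheet test written as "not contains($posted, ...)" (accepted leniently by some older processors) must become "not(contains($posted, ...))". A minimal, self-contained reproduction of the corrected form, using the JDK's built-in transformer; the variable name $posted comes from the error, everything else is invented:

```java
// Hedged sketch of the fix: "not contains(...)" produces the
// "Extra illegal tokens" error above; "not(contains(...))" is the
// legal XPath 1.0 form. The sample XML and the 'draft' test value
// are assumptions for illustration only.
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class NotContainsDemo {
    static final String XSL =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='text'/>"
      + "<xsl:template match='/doc'>"
      + "<xsl:variable name='posted' select='status'/>"
      // not(contains(...)) -- function call, not a keyword operator
      + "<xsl:if test=\"not(contains($posted, 'draft'))\">PUBLISHED</xsl:if>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    static String transform(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }
}
```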
010 at 11:10 AM, Christopher Gross wrote:
> Hi all.
>
> I have designed a synchronizer that goes out to various databases,
> extracts some data, does some processing, and then uses the
> StreamingUpdateSolrServer to send the records to a Solr index. When
> everything is up, it works j
erver
so that I can know which records it was unable to send, and then pull them
out in order to try running them again later? Any insight that anyone has
would be greatly appreciated.
Thanks!
-- Christopher Gross
Thanks Hoss, I'll look into that!
-- Chris
On Tue, Nov 9, 2010 at 1:43 PM, Chris Hostetter wrote:
>
> : one large index. I need to create a unique key for the Solr index that
> will
> : be unique per document. If I have 3 systems, and they all have a
> document
> : with id=1, then I need to c
7;t sure if I was missing something.
Thanks again!
-- Chris
On Tue, Nov 9, 2010 at 10:47 AM, Ken Stanley wrote:
> On Tue, Nov 9, 2010 at 10:39 AM, Christopher Gross
> wrote:
> > I'm trying to use Solr to store information from a few different sources
> in
> > one large
I'm trying to use Solr to store information from a few different sources in
one large index. I need to create a unique key for the Solr index that will
be unique per document. If I have 3 systems, and they all have a document
with id=1, then I need to create a "uniqueId" field in my schema that
c
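The approach that usually resolves this: prefix each system's local id with a short source label, so id=1 from three different systems yields three distinct uniqueKey values. A trivial sketch (the separator and label names are assumptions, not from the thread):

```java
// Hedged sketch: compose the Solr uniqueKey from a per-source prefix
// plus the source system's own id, so identical local ids from
// different systems can never collide in the shared index.
public class UniqueId {
    static String uniqueId(String source, String localId) {
        return source + "-" + localId;
    }
}
```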
I have SolrJ working, but it is running slower than I'd like. The logger
spits out a lot of things that make me feel like I don't have something
configured quite right. Instead of holding on to the connection, I think
that it closes it each time. I'd want it to stay open -- I'm trying to do a
cl
!
Thanks!
-- Chris
On Thu, Sep 30, 2010 at 4:40 PM, Christopher Gross wrote:
> I have also tried using SolrJ to hit my index, and I get this error:
>
> 2010-09-30 16:23:14,406 [pool-2-thread-1] DEBUG
> org.apache.commons.httpclient.params.DefaultHttpParams - Set parameter
> http.us
I have also tried using SolrJ to hit my index, and I get this error:
2010-09-30 16:23:14,406 [pool-2-thread-1] DEBUG
org.apache.commons.httpclient.params.DefaultHttpParams - Set parameter
http.useragent = Jakarta Commons-HttpClient/3.0
2010-09-30 16:23:14,406 [pool-2-thread-1] DEBUG
org.apache.com
Now I feel dumb, it was right there. Thanks! :)
-- Chris
On Thu, Sep 30, 2010 at 3:04 PM, Allistair Crossley wrote:
> it's in the dist folder with the name provided by the wiki page you refer
> to
>
> On Sep 30, 2010, at 3:01 PM, Christopher Gross wrote:
>
> > Where
Where can I get SolrJ? The wiki makes reference to it, and says that it is
a part of the Solr builds that you download, but I can't find it in the jars
that come with it. Can anyone shed some light on this for me?
Thanks!
-- Chris
I'm writing some code that pushes data into a Solr instance. I have my
Tomcat (5.5.28) set up to use 2 indexes, I'm hitting the second one for
this.
I try to issue the basic command to clear out the index
(*:*), and I get the error posted below
back.
Does anyone have an idea of what I'm missing o
Hi Andy!
I configured this a few days ago, and found a good resource --
http://wiki.apache.org/solr/MultipleIndexes
That page has links that will give you the instructions for setting up
Tomcat, Jetty and Resin. I used the Tomcat ones the other day, and it gave
me everything that I needed to get
On Mon, Sep 20, 2010 at 4:54 PM, Christopher Gross wrote:
> Thanks Jak! That was just what I was looking for!
>
> -- Chris
>
>
>
> On Mon, Sep 20, 2010 at 4:25 PM, Jak Akdemir wrote:
>> It is quite easy to modify its default value. Solr is using default
>> logg
RNING or INFO too.
>
> You can observe changes from http://localhost:8080/solr/admin/logging
> or simply ~/admin/logging pages.
>
> Details are here:
>
> http://wiki.apache.org/tomcat/Logging_Tutorial
>
> http://tomcat.apache.org/tomcat-6.0-doc/logging.html
>
> Jak
I'm running an old version of Solr (1.2) on Apache Tomcat 5.5.25.
Right now the logs all go to the catalina.out file, which has been
growing rather large. I have to shut down the servers periodically to
clear out that logfile because it keeps getting large and giving disk
space warnings.
I've tri
back to paged results.
>
> wunder
>
> On Sep 17, 2010, at 5:23 AM, Christopher Gross wrote:
>
>> @Markus Jelsma - the wiki confirms what I said before:
>> rows
>>
>> This parameter is used to paginate results from a query. When
>> specified, it indicates
SIN UP UR CONTENT!#{doc[:content]}"
>> end
>> # Add it back in to Solr
>> solr.add(docs)
>> solr.commit
>> end
>>
>> Scott
>>
>> On Thu, Sep 16, 2010 at 2:27 PM, Shashi Kant wrote:
>>>
>>> Start with a *:*, then the
That will stil just return 10 rows for me. Is there something else in
the configuration of solr to have it return all the rows in the
results?
-- Chris
On Thu, Sep 16, 2010 at 4:43 PM, Shashi Kant wrote:
> q=*:*
>
> On Thu, Sep 16, 2010 at 4:39 PM, Christopher Gross wrote:
>
I have some queries that I'm running against a solr instance (older,
1.2 I believe), and I would like to get *all* the results back (and
not have to put an absurdly large number as a part of the rows
parameter).
Is there a way that I can do that? Any help would be appreciated.
-- Chris
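The usual answer to this question is to page through the full result set with the start and rows parameters rather than one huge rows value: keep requesting the next page until a page comes back short. A sketch with the actual fetch abstracted behind a function so the loop itself is testable without a live Solr instance (the functional interface is a convenience of modern Java, not something available on a Solr 1.2-era stack):

```java
// Hedged sketch: retrieve *all* results by paging with start/rows.
// The page function stands in for a real Solr request that passes
// start and rows; the loop stops when a page returns fewer rows than
// requested, i.e. the last page.
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;

public class FetchAll {
    static <T> List<T> fetchAll(BiFunction<Integer, Integer, List<T>> page, int rows) {
        List<T> all = new ArrayList<>();
        int start = 0;
        while (true) {
            List<T> batch = page.apply(start, rows); // one "query" per page
            all.addAll(batch);
            if (batch.size() < rows) break;          // short page => done
            start += rows;
        }
        return all;
    }
}
```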
I changed my schema to use the "tdouble" that the link above describes:
and I'm able to do the search correctly now.
-- Chris
On Tue, May 11, 2010 at 11:37 AM, Christopher Gross wrote:
> The lines from the schema.xml:
>
>
> required="false"
;failures"..
>
> Best
> Erick
>
> On Tue, May 11, 2010 at 9:53 AM, Christopher Gross >wrote:
>
> > I've stored some geo data in SOLR, and some of the coordinates are
> negative
> > numbers. I'm having trouble getting a range to work.
> &g
I've stored some geo data in SOLR, and some of the coordinates are negative
numbers. I'm having trouble getting a range to work.
Using the query tool in the admin interface, I can get something like:
lon:[* TO 0]
to work to list out everything with a negative longitude, but if I try to do
somet