I believe that what you need is spatial search...
Have a look at the documentation: http://wiki.apache.org/solr/SpatialSearch
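A minimal sketch of a geofilt query along those lines (the field name store_location and the parameter values are assumptions, not from your schema):

```python
from urllib.parse import urlencode

# Hypothetical parameters; adjust the spatial field name to your schema.
params = {
    "q": "*:*",
    "fq": "{!geofilt}",          # distance filter around a point
    "sfield": "store_location",  # LatLonType field holding the store position
    "pt": "37.7749,-122.4194",   # center point as lat,lon
    "d": "10",                   # distance in km
}
query_string = urlencode(params)
print(query_string)
```

Append the result to your select URL; the wiki page also covers the bbox and geodist variants.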
On Wed, Feb 29, 2012 at 10:54 PM, Venu Shankar wrote:
> Hello,
>
> I have a design question for Solr.
>
> I work for an enterprise which has a lot of retail stores (approx.
On Thu, Mar 1, 2012 at 12:27 AM, Jamie Johnson wrote:
> Is there a ticket around doing this?
Around splitting shards?
The easiest thing to consider is just splitting a single shard in two,
reusing some of the existing buffering/replication mechanisms we have.
1) create two new shards to represent
Mark,
Is there a ticket around doing this? If the work/design was written
down somewhere the community might have a better idea of how exactly
we could help.
On Wed, Feb 29, 2012 at 11:21 PM, Mark Miller wrote:
>
> On Feb 28, 2012, at 9:33 AM, Jamie Johnson wrote:
>
>> where specifically this i
We actually do currently batch updates - we are being somewhat loose when we
say a document at a time. There is a buffer of updates per replica that gets
flushed depending on the requests coming through and the buffer size.
- Mark Miller
lucidimagination.com
On Feb 28, 2012, at 3:38 AM, eks dev
On Feb 28, 2012, at 9:33 AM, Jamie Johnson wrote:
> where specifically this is on the roadmap for SolrCloud. Anyone
> else have those details?
I think we would like to do this sometime in the near future, but I don't know
exactly what time frame it fits in yet. There is a lot to do still, and we
Doh! Sorry - this was broken - I need to fix the doc or add it back.
The shard id is actually set in solr.xml since it's per core - the sys prop
was a sugar option we had set up. So either add 'shard' to the core in
solr.xml, or to make it work like it does in the doc, do:
That sets shard to the
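For the solr.xml route, a sketch of what the core entry might look like (core name and instanceDir are placeholders; verify the attribute against your version):

```xml
<cores adminPath="/admin/cores">
  <!-- 'shard' pins this core to a named shard -->
  <core name="collection1" instanceDir="." shard="shard1" />
</cores>
```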
Do you have a _version_ field in your schema? I actually just came back to
this thread with that thought and then saw your error - so that remains my
guess.
I'm going to improve the doc on the wiki around what needs to be defined
for SolrCloud - so far we have things in the example defaults, but i
Boom!
This works: sort=map(query($qq,-1),0, ,1)+desc,score+desc&qq=domain:domainA
Thanks,
Mike
On Wed, Feb 29, 2012 at 3:45 PM, Mike Austin wrote:
> I have content that I index for several different domains. What I'd like
> to do is have all search results found for domainA returned
Hello,
I have a design question for Solr.
I work for an enterprise which has a lot of retail stores (approx. 20K).
These retail stores are spread across the world. My search requirement is
to find all the cities which are within x miles of a retail store.
So let's say if we have a retail Store i
I have content that I index for several different domains. What I'd like
to do is have all search results found for domainA returned first and
results for domainB,C,D..etc.. returned second. I could do two different
searches but was wondering if there was a way to only do one query but
return res
No. But probably we can find another way to do what you want. Please
describe the problem and include some "numbers" to give us an idea of
the sizes that you are handling. Number of documents, size of the
index, etc.
Thanks
Emmanuel
2012/2/29 Michael Jakl :
> Our Solr started to throw the followi
I think that what you want is FieldCollapsing:
http://wiki.apache.org/solr/FieldCollapsing
For example
&q=my search&group=true&group.field=subject&group.limit=5
Test it to see if that is what you want.
Thanks
Emmanuel
2012/2/29 Paul :
> Let's say that I have a facet named 'subject' that conta
What query parser are you using? It looks like Lucene Query Parser or edismax.
The cause is that wildcard queries do not get analyzed. So even if
you have lowercase filters in the analysis chain, they are not
applied when you search using *
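Since the analysis chain is skipped for wildcard terms, one common workaround is to lowercase the term on the client side before appending the wildcard (this sketch assumes your indexed tokens are lowercased):

```python
def wildcard_query(term: str) -> str:
    # Wildcard terms bypass the analyzer, so lowercase manually to
    # match tokens that were lowercased at index time.
    return term.lower() + "*"

print(wildcard_query("TESTING"))  # testing*
```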
Thanks
Emmanuel
2012/2/29 Neil Hart :
> I'm just s
AFAIK join is done within a single core. The same core should have both types
of documents.
Please let me know how it works out for you.
On Wed, Feb 29, 2012 at 8:46 PM, federico.wachs
wrote:
> I'll give this a try. I'm not sure I completely understand how to do that
> because I don't have so much experience
I'm just starting out...
for either
testing QA
TESTING QA
I can query with the following strings and find my text:
testing
TESTING
testing*
but the following doesn't work.
TESTING*
any ideas?
thanks
Neil
Mark/Sami
I ran the system with 3 zookeeper nodes, 2 solr cloud nodes, and left
numShards set to its default value (i.e. 1)
It looks like it finally synced with the other one after quite a while, but
it's throwing lots of errors like the following:
org.apache.solr.common.SolrException: missing _v
Sami,
I have the latest as of the 26th. My system is running on a standalone
network so it's not easy to get code updates without a wave of paperwork.
I installed as per the detailed instructions I laid out a couple of
messages ago from today (2/29/2012).
I'm running the following query:
http:/
I had this problem some time ago.
It happened on our homolog (staging) machine.
There were 3 Solr instances running: 1 master and 2 slaves.
My solution was: I stopped the slaves, deleted both data folders, ran an
optimize, and then started them again.
I tried to raise the OS open file limit first, but I think i
Thanks Ahmet for your reply.
I don't think mm will help here because it defaults to 100% already by the
following code.
if (parsedUserQuery != null && doMinMatched) {
  String minShouldMatch = solrParams.get(DMP.MM, "100%");
  if (parsedUserQuery instanceof BooleanQuery) {
Thanks. They are set properly. But I misspelled the tomcat6 username in
limits.conf :(
On Wednesday 29 February 2012 18:08:55 Yonik Seeley wrote:
> On Wed, Feb 29, 2012 at 10:32 AM, Markus Jelsma
>
> wrote:
> > The Linux machines have proper settings for ulimit and friends, 32k open
> > files a
On Wed, Feb 29, 2012 at 7:03 PM, Matthew Parker
wrote:
> I also took out my requestHandler and used the standard /update/extract
> handler. Same result.
How did you install/start the system this time? The same way as
earlier? What kind of queries do you run?
Would it be possible for you to check
On Wednesday 29 February 2012 17:52:55 Sami Siren wrote:
> On Wed, Feb 29, 2012 at 5:53 PM, Markus Jelsma
>
> wrote:
> > Sami,
> >
> > As superuser:
> > $ lsof | wc -l
> >
> > But, just now, i also checked the system handler and it told me:
> > (error executing: ulimit -n)
>
> That's odd, you
On Wed, Feb 29, 2012 at 10:32 AM, Markus Jelsma
wrote:
> The Linux machines have proper settings for ulimit and friends, 32k open files
> allowed
Maybe you can expand on this point.
cat /proc/sys/fs/file-max
cat /proc/sys/fs/nr_open
Those take precedence over ulimit. Not sure if there are othe
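A quick way to compare the per-process descriptor limit against those kernel-wide ceilings (Linux-specific /proc paths, guarded below):

```python
import os
import resource

# Per-process soft/hard limits on open file descriptors (what ulimit -n shows).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("process limit (soft/hard):", soft, hard)

# Kernel-wide ceilings (Linux only); these take precedence over ulimit.
for path in ("/proc/sys/fs/file-max", "/proc/sys/fs/nr_open"):
    if os.path.exists(path):
        with open(path) as f:
            print(path, "=", f.read().strip())
```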
I also took out my requestHandler and used the standard /update/extract
handler. Same result.
On Wed, Feb 29, 2012 at 11:47 AM, Matthew Parker <
mpar...@apogeeintegration.com> wrote:
> I tried running SOLR Cloud with the default number of shards (i.e. 1), and
> I get the same results.
>
> On Wed,
I tried running SOLR Cloud with the default number of shards (i.e. 1), and
I get the same results.
On Wed, Feb 29, 2012 at 10:46 AM, Matthew Parker <
mpar...@apogeeintegration.com> wrote:
> Mark,
>
> Nothing appears to be wrong in the logs. I wiped the indexes and imported
> 37 files from SharePo
On Wed, Feb 29, 2012 at 5:53 PM, Markus Jelsma
wrote:
> Sami,
>
> As superuser:
> $ lsof | wc -l
>
> But, just now, i also checked the system handler and it told me:
> (error executing: ulimit -n)
That's odd, you should see something like this there:
"openFileDescriptorCount":131,
"maxFi
Let's say that I have a facet named 'subject' that contains one of:
physics, chemistry, psychology, mathematics, etc
I'd like to do a search for the top 5 documents in each category. I
can do this with a separate search for each facet, but it seems like
there should be a way to combine them into one search.
I'll give this a try. I'm not sure I completely understand how to do that
because I don't have much experience with Solr. Do I have to use another
core to post a different kind of document and then join it?
Thanks!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Is-there-a
Sami,
As superuser:
$ lsof | wc -l
But, just now, I also checked the system handler and it told me:
(error executing: ulimit -n)
This is rather strange, it seems. lsof | wc -l is not higher than 6k right now
and ulimit -n is 32k. Is lsof not to be trusted in this case or... something
else?
T
Mark,
Nothing appears to be wrong in the logs. I wiped the indexes and imported
37 files from SharePoint using Manifold. All 37 make it in, but SOLR still
has issues with the results being inconsistent.
Let me run my setup by you, and see whether that is the issue?
On one machine, I have three z
Hi Markus,
> The Linux machines have proper settings for ulimit and friends, 32k open files
> allowed so i suspect there's another limit which i am unaware of. I also
> listed the number of open files while the errors were coming in but it did not
> exceed 11k at any given time.
How did you check
Rereading your email, perhaps this doesn't answer the question though.
Can you provide your solr.xml so we can get a better idea of your
configuration?
On Wed, Feb 29, 2012 at 10:41 AM, Jamie Johnson wrote:
> That is correct, the cloud does not currently elastically expand.
> Essentially when yo
That is correct, the cloud does not currently elastically expand.
Essentially when you first start up you define something like
numShards; once numShards is reached, all else goes in as replicas. If
you manually specify the shards using the create core commands you can
define the layout however you
Hi,
We're doing some tests with the latest trunk revision on a cluster of five
high-end machines. There is one collection, five shards and one replica per
shard on some other node.
We're filling the index from a MapReduce job, 18 processes run concurrently.
This is plenty when indexing to a si
Hi,
At this point I'm ok with one zk instance being a point of failure, I just
want to create sharded solr instances, bring them into the cluster, and be
able to shut them down without bringing down the whole cluster.
According to the wiki page, I should be able to bring up new shard by using
sha
Hi,
No, unfortunately I am not able to solve it still.
To be sure, I made my fields match the Solr schema.
I mean, for example, for my "name" field I used the name field in the Solr
schema, or I made my own "name2" and copied the same specifications as the
"name" field in Solr. Or for my "coord" field I used Solr's s
Hi Sawmya,
Were you able to resolve your problem?
If not, check the field type in the Solr schema. It should be text if you are
tokenising and searching.
Our Solr started to throw the following exception when requesting the
facets of a multivalued field holding a lot of terms.
SEVERE: org.apache.solr.common.SolrException: Too many values for
UnInvertedField faceting on field topic
at
org.apache.solr.request.UnInvertedField.uninvert(UnInver
Created SOLR-3178 covering the versioning/optimistic-locking part. In
combination SOLR-3173 and SOLR-3178 should provide the features I am
missing, and that I believe lots of other SOLR users will be able to
benefit from. Please help shape by commenting on the Jira issues. Thanks.
Per Steffens
Hi,
I'm looking for a parameter like "group.truncate=true". Though I want to
count facets not only based on the most relevant document of each
group but based on all documents. Moreover, if a facet value is in more
than one document of a group it should only count once.
Example:
Doc 1:
type: s
You have to run ZK on at least 3 different machines for fault
tolerance (a ZK ensemble).
http://wiki.apache.org/solr/SolrCloud#Example_C:_Two_shard_cluster_with_shard_replicas_and_zookeeper_ensemble
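For reference, a minimal zoo.cfg sketch for a three-node ensemble (hostnames and paths are placeholders; each node also needs a myid file matching its server number):

```properties
tickTime=2000
dataDir=/var/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```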
Ranjan Bagchi wrote:
Hi,
I'm interested in setting up a solr cluster where each machine [at l
Hold on. It's possible to find leases at a particular date and collapse
them to apartments. But it looks impossible to negate those busy
apartments. Or I don't know how.
Let's try with http://wiki.apache.org/solr/Join
If you have lease documents with "FK" field LEASE_APT_FK *and* apartment
do
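A join query along those lines might look roughly like this (LEASE_APT_FK comes from the thread; the `id` target field and the date field/values are placeholders). Negating the joined set is the open question:

```
q={!join from=LEASE_APT_FK to=id}lease_date:[2012-03-01T00:00:00Z TO 2012-03-07T00:00:00Z]
```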
> 1. Search for 4X6 generated the following parsed query:
> +DisjunctionMaxQuery(((id:4 id:x id:6)^1.2) | ((name:4 name:x
> name:6)^1.025) )
> while the search for "4 X 6" (with space in between)
> generated the query
> below: (I like this one)
> +((DisjunctionMaxQuery((id:4^1.2 | name:4^1.025)