Hello,
I use the facet.query to search documents nearby the search location. It
looks like this:
facet.query={!frange l=0 u=10}geodist()
facet.query={!frange l=0 u=20}geodist()
facet.query={!frange l=0 u=50}geodist()
facet.query={!frange l=0 u=100}geodist()
The sfield and pt parameters are set, and it works.
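For reference, the full request presumably looks something like this (the sfield/pt values here are just examples):

  q=*:*&sfield=store&pt=52.37,4.89
    &facet=true
    &facet.query={!frange l=0 u=10}geodist()
    &facet.query={!frange l=0 u=20}geodist()
    &facet.query={!frange l=0 u=50}geodist()
    &facet.query={!frange l=0 u=100}geodist()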
Hello,
Could you check that every request doesn't trigger loading values into the
cache? You can see it in the log. I recently had a similar issue when
caching for geodist() was disabled.
Regards
On Thu, Oct 13, 2011 at 11:31 AM, roySolr wrote:
> Hello,
>
> I use the facet.query to search documents nearby the search location.
Hi,
I've just moved from using the default handler on /select to a specific
requestHandler defined in solrconfig.xml (the main goal is to add default
parameters, thus simplifying requests).
My search handler looks like this:
title,author,link,description,timestamp,url,image
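A minimal sketch of such a handler, assuming the list above is the fl default (the handler name and the other defaults here are placeholders):

  <requestHandler name="/items" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="fl">title,author,link,description,timestamp,url,image</str>
      <str name="rows">10</str>
    </lst>
  </requestHandler>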
Hello Mikhail,
Thanks for your answer.
I think my cache is enabled for geodist(). The first request takes 1440 ms
and the second only 2 ms; in the statistics I can see it hits the cache. The
problem is that every request has a different location with different
distances and results, so almost every request takes the full time.
As expected, I've found the answer five minutes after posting ...
explicit
Anyway, is there somewhere in the wiki where all the available params of the
SearchHandler are explained?
Thanks!
Something's not right here, I'll ask over on the
dev list and we'll see what the reply is, you can
monitor over there too if you'd like.
Erick
On Wed, Oct 12, 2011 at 12:10 PM, Marc Tinnemeyer wrote:
> At first I thought so too. Here is a simple document.
>
>
>
> 1
OK, I've found an explicit caveat for you:
http://wiki.apache.org/solr/SpatialSearch#How_to_facet_by_distance
I don't think I'm able to help.
Just out of curiosity, why is geofilt not enough for you?
My concern is that the functions and queries score documents, but for facet
queries only the filtering is relevant.
After looking at this a little more, it feels like
a caching issue. If I bounce the Solr server
between queries, it works just fine (I'm
on trunk now, but I think it's the same on 3.4).
Could you try that experiment a bit? Bounce
your Solr server between changing the field
and see if you get the c
Thanks Erick
In the meantime I checked against 3.3 and 3.1 but no difference there.
Marc
On 13.10.2011, at 14:00, Erick Erickson wrote:
> Something's not right here, I'll ask over on the
> dev list and we'll see what the reply is, you can
> monitor over there too if you'd like.
>
> Erick
>
I don't want to use some basic facets. When the user doesn't get any results,
I want to search in a radius around his search location. For example:
"apple store in Manchester" gives no results. I want this:
Click here to see 2 results in a radius of 10km.
Click here to see 11 results in a radius of 50km.
C
I would consider "shadow" cores for this use case. Say you have coreA live, and
a coreB which is not live.
Then, upgrade the schema in coreB, clear the index, feed all content, test that
it works OK, and finally do a core SWAP:
/solr/admin/cores?action=SWAP&core=coreA&other=coreB - See
http://wiki
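A rough sequence with CoreAdmin URLs (host and paths here are just examples):

  # create the shadow core if it does not exist yet
  http://localhost:8983/solr/admin/cores?action=CREATE&name=coreB&instanceDir=coreB
  # upgrade the schema in coreB, re-feed all content into it, test it, then:
  http://localhost:8983/solr/admin/cores?action=SWAP&core=coreA&other=coreB
  # coreA now serves the new index; the old one is still reachable as coreB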
There's nothing in Solr that'll do this for you that I
know of. The copyField solution is probably your
best option.
The idea is that you have two field definitions
that use two different field types, one for each
flavor of query analyzer. Then, you can use copyField
to copy the field in question into the second field.
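As a sketch of that layout in schema.xml (the names and analyzer chains are placeholders; the second type differs only by, for example, a query-time synonym filter):

  <fieldType name="text_plain" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
    <analyzer type="query">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
              ignoreCase="true" expand="true"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>

  <field name="body"     type="text_plain" indexed="true" stored="true"/>
  <field name="body_syn" type="text_syn"   indexed="true" stored="false"/>

  <copyField source="body" dest="body_syn"/>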
Perhaps integrate this using a JavaScript or other application front end to
query Solr, get the key to the database, and then run off to get the data?
-Original Message-
From: Ikhsvaku S [mailto:ikhsv...@gmail.com]
Sent: Tuesday, October 11, 2011 2:47 PM
To: solr-user@lucene.apache.org
We have used a VMware VM for testing our index (currently around 3 GB) and it
has been just fine - at most maybe a 10 to 20% penalty, if that, even when
CPU bound. We also plan to use a VM for production.
What hypervisor one uses matters - sometimes a lot.
-Original Messag
Penela,
Yes, there are several Wiki pages with info about params for querying,
faceting, highlighting, etc.
Have a look: http://search-lucene.com/?q=parameters&fc_project=Solr&fc_type=wiki
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://sea
Deniz,
I don't think anyone will be able to help you without more details, except for
saying that the exception below is from Tomcat, not Solr.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
>
Hi,
I have some questions about the 4.0 SolrCloud implementation.
1. I want to have a large cloud of machines on a network. Each machine will
process data and write to its "local" Solr server (node, shard or whatever).
This is necessary because it won't be possible to have 100 machines with 100
I raised a JIRA, see:
https://issues.apache.org/jira/browse/SOLR-2829
Erick
On Thu, Oct 13, 2011 at 8:20 AM, Marc Tinnemeyer wrote:
> Thanks Erick
> In the meantime I checked against 3.3 and 3.1 but no difference there.
>
> Marc
>
> On 13.10.2011, at 14:00, Erick Erickson wrote:
>
>> Something's
Is it possible with geofilt and facet.query?
facet.query={!geofilt pt=45.15,-93.85 sfield=store d=5}
On Thu, Oct 13, 2011 at 4:20 PM, roySolr wrote:
> I don't want to use some basic facets. When the user doesn't get any
> results
> i want
> to search in the radius of his search location. Exampl
We have the identical problem in our system.
Our plan is to encode the most recent version of a document using an
explicit field/value;
i.e.
version=current
(or maybe current=true)
We also need to be able to allow users to search for the most current,
but only within versions they have access to.
I looked at the copyField solution and found it not suitable for what I am
trying to do. I defined a new field using a field type that uses a synonym
filter for the query analyzer. Then I used a copyField command to fill it
with the data that I want. Since I do not want to create another index, I set
the indexed parameter to false.
On Thu, Oct 13, 2011 at 9:55 AM, Mikhail Khludnev
wrote:
> is it possible with geofilt and facet.query?
>
> facet.query={!geofilt pt=45.15,-93.85 sfield=store d=5}
Yes, that should be both possible and faster... something along the lines of:
&sfield=store&pt=45.15,-93.85
&facet.query={!geofilt d=
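Spelled out for the distance buckets from the original post, that would presumably be something like:

  &sfield=store&pt=45.15,-93.85
  &facet=true
  &facet.query={!geofilt d=10}
  &facet.query={!geofilt d=20}
  &facet.query={!geofilt d=50}
  &facet.query={!geofilt d=100}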
Is there some reason you don't want to leverage Highlighter to do this
work? It has all the necessary code for using the analyzed version of
your query so it will only match tokens that really contribute to the
search match.
You might also be interested in LUCENE-2878 (which is still under
d
Hello folks,
my wildcard search shows strange behavior.
Sometimes I get results, sometimes not.
I use the latest nightly build (Solr 4.0, Build #1643).
I use these filters and tokenizers at index time:
WhitespaceTokenizer
WordDelimiterFilter
LowerCaseFilter
RemoveDuplicatesTokenFilter
ReversedWildcardFilter
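For reference, an index-time analyzer with that chain would look roughly like this in schema.xml (the attribute values are guesses):

  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
    <filter class="solr.ReversedWildcardFilterFactory" withOriginal="true"/>
  </analyzer>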
Why are you reluctant to set ' indexed="true" ' for that field? It
doesn't create another index, it's just another field in your current
index. You can surely set ' stored="false" ' in order not to keep
a copy of the raw data though...
Indexing fields twice is a common option when treating those
f
Wildcard queries are NOT analyzed. So the fact that your
queries that are identical except for case produce different
result sets is expected behavior. I believe there's a JIRA to
allow limited analysis of wildcard queries, but I confess I
don't know what the status of it is.
You'll have to do wha
Or, alternatively, it would be nice to link a field to another field so that
it can use the index of that field.
Having different "query analyzers" on the same index would make
Solr/Lucene more flexible, I think. But let's wait and see, maybe
it is possible to do this and I am
Hi Erick,
thanks for your quick response.
I've analyzed it and had already thought the same.
These are the JIRA issues:
https://issues.apache.org/jira/browse/SOLR-219
https://issues.apache.org/jira/browse/SOLR-2438
Both are still open.
I think I'll wait 1-2 months; then I'll write a custom component :)
Hi *,
I am a bit confused about the best way to achieve my requirements.
We have a mail ticket system. A ticket is created when a mail is received by
the system:
doc 1:
uid: 1001_in
ticketid: 1001
type: in
body: I have a problem
category: bugfixes
date: 201110131955
This incoming docum
Sorry Erick, my last post and yours crossed each other.
I am reluctant to use another index (or a multi-valued index) since I think
it will increase the storage I need for those indexes without adding
functionality (and storage could be an issue for me).
But first let's see if I understand you correctly.
Our Solr implementation includes a third-party filter that adds additional,
multiple term types to the token list (beyond "word", etc.). Most of the time
this is exactly what we want, but we felt we could improve our search results
by having different tokens on the index and query side. Since
On 10/11/2011 11:49 AM, Toke Eskildsen wrote:
Inline or top-posting? Long discussion, but for mailing lists I
clearly prefer the former.
Ditto. ;)
I have little experience with VM servers for search. Although we use a
lot of virtual machines, we use dedicated machines for our searchers,
prim
On Thu, Oct 13, 2011 at 1:37 PM, wrote:
>
> Hi,
> I have some questions about the 4.0 solr cloud implementation.
>
> 1. I want to have a large cloud of machines on a network. each machine
> will process data and write to its "local" solr server (node,shard or
> whatever). This is necessary becau
Hi,
I'm trying to index a recipe database which contains, amongst other things, a
list of ingredients stored as free text in a single MySQL column,
e.g.:
2 apple bananas (or ordinary bananas), peeled
50g/2oz soft brown sugar
100g/3½oz self-raising flour
1 free-range egg, beaten
3-4 tbsp sparkli
I'm following the instructions here:
http://wiki.apache.org/solr/SolrTomcat#Installing_Solr_instances_under_Tomcat
...under the heading "Multiple Solr Webapps".
I have configured the context fragment as instructed, placed the
apache-solr-3.4.0.war in the directory pointed to by the docBase
va
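For reference, the context fragment from that wiki page is along these lines (the paths here are just examples):

  <!-- e.g. $CATALINA_HOME/conf/Catalina/localhost/solr1.xml -->
  <Context docBase="/opt/solr/apache-solr-3.4.0.war" debug="0" crossContext="true">
    <Environment name="solr/home" type="java.lang.String"
                 value="/opt/solr/solr1" override="true"/>
  </Context>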
Mark,
Would using Solr's support for dynamic fields work for you here? (assuming you
can parse the value of that DB column and extract individual ingredients before
indexing)
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
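A rough sketch of what that could look like (assuming the ingredients are parsed out beforehand; the field naming is just an illustration):

  <!-- schema.xml -->
  <dynamicField name="ingredient_*" type="string" indexed="true" stored="true"/>

  <!-- one parsed recipe document -->
  <doc>
    <field name="id">recipe-42</field>
    <field name="ingredient_1">banana</field>
    <field name="ingredient_2">soft brown sugar</field>
    <field name="ingredient_3">self-raising flour</field>
    <field name="ingredient_4">free-range egg</field>
  </doc>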
Monica,
This is different from Solr's synonyms filter with different synonyms files,
one for index-time and the other for query-time expansion (not sure when you'd
want that, but it sounds like that is what you need), right? If so, maybe
you can describe what your filter does differently
Thanks, Otis - yes, this is different from the synonyms filter, which we also
use. For example, if you wanted all tokens that were marked 'lemma' to be
removed, you could specify that, and all tokens with any type other than
'lemma' would still be returned. You could also choose to remove all
Great! Thank you. I'm eager to test it on EC2 whenever it's near beta-ready.
On 10/13/2011 11:51 AM, Ted Dunning wrote:
On Thu, Oct 13, 2011 at 1:37 PM, wrote:
Hi,
I have some questions about the 4.0 solr cloud implementation.
1. I want to have a large cloud of machines on a network. each m
Yes, it will make your existing index larger, but it's, as far as I know,
the only way to effect this. You've outlined the process well.
But this may not be that valuable in the general case. Very frequently,
if people use different analyzers at index and query time, they get into
deep trouble. And
I am new to Solr and not a web developer. I am a data warehouse guy trying to
use Solr for the first time. I am familiar with XSL but I can't figure out how
to get the example.xsl applied to my XML results. I am running Tomcat
and have Solr working. I copied over the Solr multiple cor
Hello!
I've got a problem and maybe someone had a similar one ;) I want to
'force' dismax to make a query like the following one:
+title:foo^100 desc:foo
The name and desc fields are only an example; there can be multiple
fields that lie under the names 'title' and 'desc'.
What I try to achi
http://www.lucidimagination.com/search/document/1f33fb9f68edee4c/date_range_query_where_doc_has_more_than_one_date_field
+launchDate:[* TO NOW/DAY] +expireDate:[NOW/DAY TO *]
: I am using solr 3.3 and solrJ. I have two date fields launch_date and
: expiry_date. Now i want to be able to do a sea
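With the field names from the question, that range clause could go into a filter query, e.g.:

  fq=+launch_date:[* TO NOW/DAY] +expiry_date:[NOW/DAY TO *]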
: I've got a problem and maybe someone had a similar one ;) I want to
: 'force' dismax to make a query like the following one:
: +title:foo^100 desc:foo
:
: The name and desc fields are only an exmaple, there can be a multiple
: fields that lay under the name of 'title' and 'desc'.
:
: What I tr
One thing to consider is the case where the JVM is up, but the system is
otherwise unavailable (say, a NIC failure, firewall failure, load balancer
failure) - especially if you use a SAN (whose connection is different from the
normal network).
In such a case the old master might have uncommitte
Yes that is a good point. Thanks.
I think I will avoid using NAS/SAN and use two masters, one set up as a repeater
(slave and master). In the case of a very rare master failure, some minor manual
intervention will be required to reconfigure the remaining master or bring the
other one back up.
My only conc
Hello!
> : I've got a problem and maybe someone had a similar one ;) I want to
> : 'force' dismax to make a query like the following one:
> : +title:foo^100 desc:foo
> :
> : The name and desc fields are only an exmaple, there can be a multiple
> : fields that lay under the name of 'title' and 'de
http://wiki.apache.org/solr/XsltResponseWriter
This is for the single-core example. It is easiest to just go to
solr/example, run java -jar start.jar, and hit the URL in the above wiki
page. Then poke around in solr/example/solr/conf/xslt. There is no
solrconfig.xml change needed.
It is generally
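Concretely, with the example running, a request like this applies conf/xslt/example.xsl to the results:

  http://localhost:8983/solr/select?q=*:*&wt=xslt&tr=example.xsl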
current: bool              // for an fq which searches only current versions
last_current_at: datetime  // for date range queries, or group sorting to find
                           // what was current at a given date
Sorry if I've missed a requirement.
lee c
On 13 October 2011 15:01, Mike Sokolov wrote:
> We have the identical problem in our syste
Sorry, I missed the permission stuff:
I think that's OK if you index the ACL as part of the document; that is
to say, each version has its own ACL. Match users against the version's ACL
data as a filter query and use the last_current_at date as a sort.
On 13 October 2011 22:04, lee carroll wrote:
> current: b
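Pulling that together as a sketch (field names as suggested above; the ACL values are placeholders):

  <!-- schema.xml -->
  <field name="current"         type="boolean" indexed="true" stored="false"/>
  <field name="last_current_at" type="date"    indexed="true" stored="true"/>
  <field name="acl"             type="string"  indexed="true" stored="false" multiValued="true"/>

  <!-- only current versions the user is allowed to see -->
  &fq=current:true&fq=acl:(groupA OR groupB)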
Thanks Hoss,
Yes, +name:foo^10 desc:foo should do it.
Can one configure (e)dismax to add that + to one or more fields, like name in that
example, in order to require that clause though?
Thanks,
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://s
: > Deja-Vu...
:
: >
http://www.lucidimagination.com/search/document/3551f130b6772799/excluding_docs_from_results_based_on_matched_field
:
: > -Hoss
:
: Thanks for the answer, the problem is that the query like this:
:
: q=foo&defType=dismax&qf=title&bq={!dismax qf='title desc' v=$q}
:
: cau
Hi Otis,
I know it is coming from Tomcat and was curious if anyone had the same
problem before... and as for details, it is the only thing I got in the logs
as an error... I can put more details if you tell me what exactly you want
to see... I am confused and don't know what else I can put other than er
I have the luxury of JMS in my environment, so that may be a simple way to
solve this...
Sent from my iPhone
On Oct 13, 2011, at 4:02 PM, "Robert Stewart" wrote:
> Yes that is a good point. Thanks.
>
> I think I will avoid using NAS/SAN and use two masters, one setup as a
> repeater (slave
We recently moved our Solr indexing from DIH on Solr 1.4 to our
own Hadoop-based import using SolrJ and Solr 3.4.
While everything seems to be working, we seem to have one stumper of a
problem.
Any document that has a string field value with a carriage return "\r" is
having that carr
Hello,
I'm working on Solr 1.4 with around 10 million documents. Usually, it's
fine. However, an issue arises when I add a new field to schema.xml: I
need to reindex the whole database for that new field. Indexing the whole
database with all the properties takes very long. It would be be
Hi Chhorn,
There is currently no better way - you need to update/re-add the whole document.
But Solr 1.4 is rather old. If you get Solr 3.4 your indexing speed will go up
noticeably!
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-luc
Thanks for the response, but I have seen this page and I had a few
questions.
1. Since I am using Tomcat, I had to move the example directory into the
Tomcat directory structure. In the multicore setup, there is no example.xsl.
Where do I need to put it? Also, how do I send docs for indexing when ru