-filterMinutes:[* TO *] should return documents that do not have a value
assigned to that field.
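A minimal sketch of the combined query (using the field and bound from the
question below; note that inside an OR clause Solr needs the *:* prefix on
the negative part):

  q=filterMinutes:[* TO 50] OR (*:* -filterMinutes:[* TO *])

This returns the documents in range plus the documents that have no
filterMinutes value at all.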
On Jan 3, 2012, at 11:30 PM, Allistair Crossley wrote:
> Evening all,
>
> A subset of my documents have a field, filterMinutes, that some other
> documents do not. filterMinutes stores a number.
>
Evening all,
A subset of my documents have a field, filterMinutes, that some other documents
do not. filterMinutes stores a number.
I often issue a query that contains a filter query range, e.g.
q=filterMinutes:[* TO 50]
I am finding that adding this query excludes all documents that do not fe
Thanks, Chris, for clarifying. This helps a lot.
On Wed, Jan 4, 2012 at 2:07 AM, Chris Hostetter-3 [via Lucene] <
ml-node+s472066n3630181...@n3.nabble.com> wrote:
> : If your log level is set at least to INFO, as it should be by default
> Solr does
> : log response time to a different file. E.g., I
Jul,
Maybe you missed "Example of content :" and "My charfilter should clean it like
:"
in your previous mail? We need them in order to consider your problem. :->
koji
--
http://www.rondhuit.com/en/
(12/01/04 2:19), darul wrote:
Hello,
I wanted to use char filter PatternReplaceCharFilterFac
Address the points I brought up or don't reply with funny name calling.
Below are two key points reiterated and re-articulated in an easy-to-answer way:
* Multi-select faceting is per-segment (true or false)
* Filters are cached per-segment (true or false)
On Tue, Jan 3, 2012 at 2:16 PM, Yonik
On Tue, Jan 3, 2012 at 5:03 PM, Jason Rutherglen
wrote:
> Yikes. I'd love to see a test showing that un-inverted field cache
> (which is for ALL segments as a single unit) can be used efficiently
> with NRT / soft commit.
Please stop being a troll.
Solr has multiple faceting methods - only one us
The main point is, Solr, unlike for example Elastic Search and other
Lucene-based systems, does NOT cache filters or facets per-segment.
This is a fundamental design flaw.
On Tue, Jan 3, 2012 at 1:50 PM, Yonik Seeley wrote:
> On Tue, Jan 3, 2012 at 4:36 PM, Erik Hatcher wrote:
>> As I understand
> multi-select faceting
Yikes. I'd love to see a test showing that un-inverted field cache
(which is for ALL segments as a single unit) can be used efficiently
with NRT / soft commit.
On Tue, Jan 3, 2012 at 1:50 PM, Yonik Seeley wrote:
> On Tue, Jan 3, 2012 at 4:36 PM, Erik Hatcher wrote:
>> A
On Tue, Jan 3, 2012 at 4:36 PM, Erik Hatcher wrote:
> As I understand it, the document and filter caches add value *intra* request
> such that they keep additional work (like fetching stored fields from disk
> more than once) from occurring.
Yep. Highlighting, multi-select faceting, and distrib
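For anyone unfamiliar with the term, multi-select faceting is the pattern of
tagging a filter and excluding it when computing a facet - a minimal sketch,
with a hypothetical color field:

  q=*:*&facet=true&fq={!tag=colorfq}color:red&facet.field={!ex=colorfq}color

so the color facet counts are computed as if the color:red filter were not
applied.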
As I understand it, the document and filter caches add value *intra* request
such that they keep additional work (like fetching stored fields from disk more
than once) from occurring.
Erik
On Jan 3, 2012, at 16:26 , Jason Rutherglen wrote:
> *Laugh*
>
> I stand by what Mark said:
>
>
Hello Mikhail
Thank you for the fast reply, please find my answers inline.
On Tue, Jan 3, 2012 at 11:00 PM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Hello,
>
> Please find my thoughts below.
>
> On Wed, Jan 4, 2012 at 12:39 AM, Maxim Veksler wrote:
> >
> > Hello,
> >
> > I've sta
*Laugh*
I stand by what Mark said:
"Right - in most NRT cases (very frequent soft commits), the cache should
probably be disabled."
On Mon, Jan 2, 2012 at 7:45 PM, Yonik Seeley wrote:
> On Mon, Jan 2, 2012 at 9:58 PM, Jason Rutherglen
> wrote:
>>> It still normally makes sense to have the cach
Hello,
Please find my thoughts below.
On Wed, Jan 4, 2012 at 12:39 AM, Maxim Veksler wrote:
>
> Hello,
>
> I've started to evaluate Solr and so far haven't seen any mention of
> support for compound indexes.
If I understand you correctly, it doesn't. AFAIK it combines separate indexes
based on the
Hello,
I've started to evaluate Solr and so far haven't seen any mention of support
for compound indexes.
I'm looking to do either radius or shape based geospatial proximity queries
(find all documents that are within 20km of a given lat,lng).
I would also at times be doing geo queries combined with anot
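A sketch of the kind of query I mean, assuming a hypothetical location field
of a spatial type and Solr's geofilt syntax:

  q=*:*&fq={!geofilt sfield=location pt=45.15,-93.85 d=20}&fq=category:restaurants

where the second fq stands in for whatever other constraint gets combined
with the geo filter.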
Hi Suneel,
I have implemented Solr sharding in one of my projects where the data was on
the order of 1 billion documents and my queries were throwing out-of-memory
exceptions because of the huge index. Here are my views:
- Have identical Solr server setups for each shard, with the same schema.
1. We need to c
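For reference, a sharded query then fans out with the shards parameter - a
sketch with hypothetical hosts:

  http://host1:8983/solr/select?q=*:*&shards=host1:8983/solr,host2:8983/solr

Each listed shard is queried and the results are merged by the node that
received the request.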
: If your log level is set at least to INFO, as it should be by default Solr
does
: log response time to a different file. E.g., I have
: INFO: [] webapp=/solr path=/select/
: params={indent=on&start=0&q=*:*&version=2.2&rows=10} hits=22 status=0
: QTime=40
: where the QTime is 40ms, as also reflec
: Ok. Let me try with the plain Java one. Possibly I'll need tighter
: integration like injecting a core into the singleton, etc. But I don't know
: yet.
yeah ... it really depends on what you mean by "singleton" ...
...single instance in entire JVM?
...single instance in each web
I am using Solr. My index has become too large, so I want to implement the
shards concept, but I have some doubts. I searched a lot but did not find a
satisfying answer.
1. Do we need to create a handler for shards in solrconfig.xml?
2. Will the index be different for each shard instance, meaning we need to
break the data in part
: About bumping MaxBooleanQueries. You can certainly
: bump it up, but it's a legitimate question whether the
: user is well served by allowing that pattern as opposed
: to requiring 2 or 3 leading characters. The assumption
I think the root of the issue here is that when executing queries, reall
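For reference, the limit being discussed is configured in solrconfig.xml
under the query section - a sketch showing the default value:

  <maxBooleanClauses>1024</maxBooleanClauses>

Raising it permits larger expanded queries at the cost of more work per
request.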
I've got another question for anyone that might have some insight - how do
you get all of your indexed information along with the suggestions? i.e. if
each suggestion has an ID# associated with it, do I have to then query for
that ID#, or is there some way of specifying a field list in the URL to t
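If the answer is the two-step approach, the follow-up lookup would be roughly
(hypothetical id and field names):

  http://localhost:8983/solr/select?q=id:1234&fl=id,title,url

with fl limiting the response to just the fields needed.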
Hi List
I have a Solr cluster set up in a master/slave configuration where the
master acts as an indexing node and the slaves serve user requests.
To avoid accidental posts of new documents to the slaves, I have disabled
the update handlers.
However, I use an externalFileField. When the file is
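For context, the field is declared along these lines - a sketch with
hypothetical names; keyField, defVal and valType are the attributes
ExternalFileField cares about:

  <fieldType name="extFile" class="solr.ExternalFileField" keyField="id" defVal="0" valType="pfloat"/>
  <field name="rank" type="extFile"/>

As I understand it, the values themselves live in an external_<fieldname>
file in the data directory rather than in the index itself.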
Hello,
I wanted to use the char filter PatternReplaceCharFilterFactory to prevent
specific content from being indexed.
In the end I got many issues with highlights and offsets... so I removed it.
Example:
Example of content :
My charfilter should clean it like :
I do not understand why offset of hi
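The analyzer wiring I was using looked roughly like this (a sketch; the
actual pattern is the part omitted above):

  <analyzer>
    <charFilter class="solr.PatternReplaceCharFilterFactory" pattern="(pattern-to-strip)" replacement=""/>
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>

Since a charFilter rewrites the text before tokenization, token offsets are
mapped back to the original input, which is where highlighting can get
confusing.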
I'm also very interested in this - for my regex augmenter. If we could get an
augmenter to add highlighting results directly to the doc, like the explain
augmenter does, then I could definitely write up that regex augmenter.
http://lucene.472066.n3.nabble.com/Regex-DocTransformer-td3627314.html
Hi Jan,
Yes, I just saw the answer. I've implemented that, and it's working as
expected. I do have Suggest running on its own core, separate from my
standard search handler. I think, however, that the custom QueryConverter
that was linked to is now too restrictive. For example, it works perfectly
I will. Thanks.
> Hi Darren,
>
> Would you please tell us all the parameters that you are sending in the
> request? You can use the parameter "echoParams=all" to get the list in the
> output.
>
> Thanks,
>
> *Juan*
>
>
>
> On Mon, Jan 2, 2012 at 8:37 PM, Darren Govoni wrote:
>
>> Forgot to add, t
Hi,
I am taking snapshots of my master index after optimize calls (run once
each day), to get a clean backup of the index.
Is there a parameter to tell the replication handler how many snapshots
to keep and the rest should be deleted? Or must i use a custom script
via cron?
regards
Torsten
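If there is no such parameter, a cron job can do it - a sketch, assuming the
default snapshot.* directory naming, GNU tools, and keeping the newest 7:

  cd /path/to/solr/data && ls -1dt snapshot.* | tail -n +8 | xargs -r rm -rf

ls -1dt lists the snapshot directories newest first, tail -n +8 selects
everything from the 8th entry on, and xargs -r removes them (doing nothing
when there are 7 or fewer).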
Hi,
As you see, you've got an answer at StackOverflow already with a proposed
solution to implement your own QueryConverter.
Another way is to create a Solr core solely for Suggest, and tune it exactly
the way you like. Then you can have it suggest from the whole input as well as
individual to
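If you do implement your own QueryConverter, a minimal pass-through sketch
(untested, from memory of the 3.x/4.0 API) that hands the whole input to the
suggester as one token:

  import java.util.ArrayList;
  import java.util.Collection;
  import org.apache.lucene.analysis.Token;
  import org.apache.solr.spelling.QueryConverter;

  public class WholeInputQueryConverter extends QueryConverter {
    @Override
    public Collection<Token> convert(String original) {
      Collection<Token> result = new ArrayList<Token>();
      if (original != null && original.trim().length() > 0) {
        // one token spanning the entire input, instead of splitting on whitespace
        result.add(new Token(original, 0, original.length()));
      }
      return result;
    }
  }

It gets registered in solrconfig.xml with a queryConverter element pointing
at the class.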
Hi Darren,
Would you please tell us all the parameters that you are sending in the
request? You can use the parameter "echoParams=all" to get the list in the
output.
Thanks,
*Juan*
On Mon, Jan 2, 2012 at 8:37 PM, Darren Govoni wrote:
> Forgot to add, that the time when I DO want the highlig
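For reference, a request carrying that parameter looks like (hypothetical
host and core):

  http://localhost:8983/solr/select?q=*:*&hl=true&echoParams=all

and the echoed parameters appear in the responseHeader section of the output.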
Hi,
Soft commit works with the command below, but it doesn't work in
solrconfig.xml. What is wrong with the XML part below?
curl http://localhost:8984/solr/update -H "Content-Type: text/xml"
--data-binary ''
1000
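For comparison, the solrconfig.xml form I would expect - a sketch assuming
the 1000 above is the intended maxTime in milliseconds - goes inside the
updateHandler section:

  <autoSoftCommit>
    <maxTime>1000</maxTime>
  </autoSoftCommit>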
A JIRA ticket has been created; this discussion here is closed.
https://issues.apache.org/jira/browse/SOLR-2999
Oliver
Thanks for the clear explanation. I'll open a ticket as soon as JIRA is up
and running again.
Oliver
On Tue, Jan 3, 2012 at 9:12 AM, OliverS wrote:
> Hi all
>
> Thanks a lot, and it seems to be a bug, but not of 4.0 only. You are right,
> I was doing a commit on an optimized index without adding any new docs (in
> fact, I did this for replication on the master). I will open a ticket as
> soon as
Hi all
Thanks a lot, and it seems to be a bug, but not of 4.0 only. You are right,
I was doing a commit on an optimized index without adding any new docs (in
fact, I did this for replication on the master). I will open a ticket as
soon as I fully understand what's going on. I have difficulties
und