Hey,
I have a field defined as such:
with the string type defined as:
When I try applying some query-time boosts via the bq parameter on values of
this field, it seems to behave
strangely for documents that actually have multiple values:
If I'd do a boost for a particular value ("site_id
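For reference, a minimal sketch of the kind of request being described, assuming a dismax-style handler and the site_id field and values from this thread (the q value and boost factor are placeholders):

```
q=foo&defType=dismax&bq=site_id:5^2.0
```

With a multiValued field, a document holding both values 5 and 6 matches a bq clause on either value, which may be relevant to the behavior being asked about.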
Hey Ken,
Thanks for your reply.
When I wrote '5|6' I meant that this is a multiValued field with two values
'5' and '6', rather than the literal string '5|6' (and any Tokenizer). Does
your reply still hold? That is, are multiValued fields dependent on the
notion of tokenization to such a degree so
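The field and type definitions referenced above were elided; a multiValued string field of the kind being discussed would typically be declared along these lines in schema.xml (names assumed from the thread):

```xml
<field name="site_id" type="string" indexed="true" stored="true" multiValued="true"/>
```

Solr's string type is not analyzed, so each of the multiple values is indexed as a single whole term.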
Hey all,
I was wondering - when running Solr in a master/slaves setup using the 1.3
snap* scripts,
does the slaves' solrconfig.xml mergeFactor value make any difference? As
far as I understand,
the mergeFactor specified in the master's solrconfig.xml dictates the format
of the index, and then the s
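For context, in a 1.3-era solrconfig.xml the mergeFactor in question sits in the index settings, roughly like this (a sketch; 10 is the stock default):

```xml
<indexDefaults>
  <mergeFactor>10</mergeFactor>
</indexDefaults>
```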
Hey,
I noticed in recent SVN versions the example/solr/bin dir has been empty.
I understand the various snappulling scripts are basically deprecated since
replication is now handled in-process. However, I was wondering what the
state of the
optimize script is, i.e. how do I control optimization myse
Yonik Seeley-2 wrote:
>
> A script isn't really needed for something as simple as a commit:
>
> curl 'http://localhost:8983/solr/update?commit=true'
>
> -Yonik
> http://www.lucidimagination.com
>
> On Wed, Aug 12, 2009 at 2:27 PM, KaktuChakarabati
> wrote:
t with optimize?
>
> curl 'http://localhost:8983/solr/update?optimize=true'
>
> -Yonik
> http://www.lucidimagination.com
>
>
>
> On Wed, Aug 12, 2009 at 2:43 PM, KaktuChakarabati
> wrote:
>>
>> Hey Yonik,
>> Thanks for the quick reply; however, m
Hello,
I've recently switched over to Solr 1.4 (a recent nightly build) and have been
using the new replication.
Some questions come to mind:
In the old replication, I could snappull with multiple slaves asynchronously
but perform the snapinstall on each at the same time (+- epsilon seconds),
so tha
Or am I missing something with the current in-process
replication?
Thanks,
-Chak
Shalin Shekhar Mangar wrote:
>
> On Fri, Aug 14, 2009 at 8:39 AM, KaktuChakarabati
> wrote:
>
>>
>> In the old replication, I could snappull with multiple slaves
>> asynchronously
नोब्ळ्-2 wrote:
>
> usually the pollInterval is kept to a small value like 10 secs. There
> is no harm in polling more frequently. This can ensure that
> replication happens at almost the same time
>
>
>
>
> On Fri, Aug 14, 2009 at 1:58 PM, KaktuChakarabati
> wro
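A sketch of the slave-side configuration being discussed, as it would appear in a 1.4 solrconfig.xml (the master hostname is a placeholder; pollInterval is in HH:mm:ss):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <!-- poll every 10 seconds, as suggested above -->
    <str name="pollInterval">00:00:10</str>
  </lst>
</requestHandler>
```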
Hey,
I was wondering if there is any equivalent in the new in-process replication
to what
could previously be achieved by running snapcleaner -N, which would
essentially
allow me to keep backups of the N latest index pulls on a search node.
This is of course very important for failover operation in produ
Hey,
I was wondering - is there a mechanism in Lucene and/or Solr to mark a
document in the index
as deleted and then have this change reflected in query serving without
performing the whole
commit/warmup cycle? This seems largely appealing to me, as it allows a kind
of solution
where deletes are sim
a fairly simple change though perhaps too late for 1.4
> release?
>
> On Tue, Aug 25, 2009 at 3:10 PM, KaktuChakarabati
> wrote:
>>
>> Hey,
>> I was wondering - is there a mechanism in lucene and/or solr to mark a
>> document in the index
>> as deleted and then h
as per another
> discussion IW isn't public yet, so all you'd need to do is
> expose it from UpdateHandler. Then it should work as you want,
> though there would need to be a new method to create a new
> searcher from IW.getReader without calling IW.commit.
>
> On Tue,
Hey,
I noticed that with the new in-process replication, it is not as
straightforward to have
(production-serving) Solr index snapshots for backup (these used to be a
natural byproduct
of the snapshot-taking process).
I understand there are some command-line utilities for this (abc..)
Can someone please expla
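For what it's worth, the in-process ReplicationHandler also exposes an HTTP backup command; a sketch, with host and port as placeholders:

```
http://localhost:8983/solr/replication?command=backup
```

This writes a snapshot of the current index into the data directory, which can then be copied off the node independently of serving.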
ith the new (Java) replication tools. I don't know
> as much about them, though.
>
> 2009/9/29 KaktuChakarabati :
>>
>> Hey,
>> I noticed with new in-process replication, it is not as straightforward
>> to
>> have
>> (production serving) solr index snap
Hey,
I've been trying to find some documentation on using this feature in 1.4, but
the wiki page is a little sparse..
Specifically, here's what I'm trying to do:
I have a field, say 'duplicate_group_id', that I'll populate based on some
offline document deduplication process I have.
All I want is for s
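For reference, the 1.4 deduplication feature that the (sparse) wiki page covers is wired up as an update processor chain in solrconfig.xml; a sketch using the field name from this message (the `fields` list is a placeholder, and note this computes the signature at index time rather than taking an offline-computed one):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">duplicate_group_id</str>
    <bool name="overwriteDupes">true</bool>
    <str name="fields">name,features</str>
    <str name="signatureClass">org.apache.solr.update.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```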
> Otis
> --
> Sematext is hiring -- http://sematext.com/about/jobs.html?mls
> Lucene, Solr, Nutch, Katta, Hadoop, HBase, UIMA, NLP, NER, IR
>
>
>
> - Original Message
>> From: KaktuChakarabati
>> To: solr-user@lucene.apache.org
>>
Hello everyone,
I am trying to write a new QParserPlugin + QParser, one that will work
similarly
to how DisMax does, but will give me more control over the
FunctionQuery-related part of the query processing (e.g. with regard to a
specified bf parameter).
Specifically, I want to be able to affect the way
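A custom parser like the one described gets registered in solrconfig.xml and selected per-request via defType; a sketch (the plugin class name is hypothetical):

```xml
<queryParser name="myparser" class="com.example.MyQParserPlugin"/>
```

It can then be invoked with defType=myparser, or inline via {!myparser} in the q parameter.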
Hey,
I am trying to understand what kind of calculation I should do in order to
come up with a reasonable RAM size for a given Solr machine.
Suppose the index size is 16GB.
The max heap allocated to the JVM is about 12GB.
The machine I'm trying now has 24GB.
When the machine is running for a while
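As a back-of-the-envelope sketch of the sizing being asked about (my own arithmetic, not from the thread): whatever RAM the JVM heap does not claim is what the OS can use for its page cache, and with these numbers the index does not fully fit there:

```shell
# Sizing sketch using the figures from this message (all values in GB).
INDEX_GB=16
TOTAL_GB=24
HEAP_GB=12
# RAM left over for the OS page cache once the heap is claimed
CACHE_GB=$((TOTAL_GB - HEAP_GB))
echo "RAM left for OS page cache: ${CACHE_GB}GB"
if [ "$CACHE_GB" -lt "$INDEX_GB" ]; then
  echo "the ${INDEX_GB}GB index cannot be fully cached"
fi
```

With the numbers above this reports 12GB of cache against a 16GB index, so some queries will hit disk.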
On Tue, Mar 16, 2010 at 9:08 PM, KaktuChakarabati
> wrote:
>
>>
>> Hey,
>> I am trying to understand what kind of calculation I should do in order
>> to
>> come up with reasonable RAM size for a given solr machine.
>>
>> Suppose the index size is a