Hi Joel,
I just tested with custom equals and hashCode ... what I basically did
is that I created a string object based on all the function values and
used this for equals (with an instanceof check) and for the hashCode
method. The result was quite the same as before, all results are cached
unless I
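A minimal sketch of that approach, assuming a hypothetical custom
function/ValueSource class (names below are illustrative, not from the
original mail): build one key string from everything that defines the
function and base both equals() and hashCode() on it.

public class MyFunctionValueSource /* extends ValueSource */ {

  // parameters that define this function (used by the real implementation,
  // omitted here)
  private final String fieldName;
  private final double factor;
  // one key string built from everything that defines this function
  private final String key;

  public MyFunctionValueSource(String fieldName, double factor) {
    this.fieldName = fieldName;
    this.factor = factor;
    this.key = getClass().getName() + ":" + fieldName + ":" + factor;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof MyFunctionValueSource)) {
      return false;
    }
    return key.equals(((MyFunctionValueSource) o).key);
  }

  @Override
  public int hashCode() {
    return key.hashCode();
  }
}

Two instances built from the same parameters then compare equal, so the
caches treat them as the same entry.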
Long blog post on commits and the state of updates here:
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
HDFS is perfectly fine with Solr; there's even an HdfsDirectoryFactory for
your index. It has its own
performance characteristics/tuning param
bq: Do I understand you correctly that when two segments get merged, the
docids (of the original segments) remain the same?
The original segments are unchanged; segments are _never_ changed after
they're closed. But they'll be thrown away. Say you have segment1 and
segment2 that get merged into seg
We should probably talk about "internal" Lucene document IDs and "external"
or "rebased" Lucene document IDs. The internal document IDs are always
"per-segment" and never, ever change for that closed segment. But... the
application would not normally see these IDs. Usually the externally visible
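A small illustration of that rebasing, assuming the Lucene 4.x reader API
(directory path is a placeholder): each segment leaf carries a docBase
offset, and the top-level ("external") ID of a hit is docBase plus the
per-segment ID.

import java.io.File;
import java.util.List;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.FSDirectory;

public class DocIdRebasing {
  public static void main(String[] args) throws Exception {
    DirectoryReader reader = DirectoryReader.open(FSDirectory.open(new File(args[0])));
    List<AtomicReaderContext> leaves = reader.leaves();
    for (AtomicReaderContext leaf : leaves) {
      // leaf.docBase is the offset added to every per-segment doc ID
      System.out.println("segment ord=" + leaf.ord
          + " docBase=" + leaf.docBase
          + " maxDoc=" + leaf.reader().maxDoc());
    }
    reader.close();
  }
}

After a merge the leaves (and therefore the docBase offsets) change, which
is why the externally visible IDs are not stable across merges even though
the per-segment IDs inside a closed segment never move.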
Hi everyone,
I am wondering how the commit operation works in SolrCloud:
Say I have 2 parallel indexing processes. What if one process sends a big
update request (an add command with a lot of docs), and the other one just
happens to send a commit command while the update request is being
processed?
Is
SolrCloud does not use commits for update acceptance promises.
The idea is, if you get a success from the update, it’s in the system, commit
or not.
Soft Commits are used for visibility only.
Standard Hard Commits are used essentially for internal purposes and should be
done via auto commit ge
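A hedged SolrJ 4.x sketch of that contract (zkHost, collection and field
names are placeholders): the client checks the response of the add itself
and sends no commit; visibility comes from the (soft) autoCommit settings
on the server.

import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

public class AddWithoutCommit {
  public static void main(String[] args) throws Exception {
    CloudSolrServer server = new CloudSolrServer("zkhost1:2181,zkhost2:2181");
    server.setDefaultCollection("collection1");

    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");
    doc.addField("title", "hello");

    UpdateResponse rsp = server.add(doc); // note: no server.commit() here
    if (rsp.getStatus() == 0) {
      // The update was accepted and is durable; it becomes searchable on the
      // next (soft) commit triggered by the server-side autoCommit settings.
      System.out.println("update accepted in " + rsp.getElapsedTime() + " ms");
    }
    server.shutdown();
  }
}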
I have a multivalued field whose values have payloads. How can I use those
payloads for boosting? (When a user searches for a keyword and a match
happens in that multivalued field, its payload should be added to the
general score.)
PS: I use Solr 4.5.1 as SolrCloud.
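One possible way to do this on the Lucene 4.x side, as a hedged sketch
(this is not an out-of-the-box Solr feature; it would need a custom
similarity factory and query parser plugin to wire in, and field/term names
here are placeholders): decode the payload in scorePayload() and search
with a PayloadTermQuery.

import org.apache.lucene.analysis.payloads.PayloadHelper;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.payloads.AveragePayloadFunction;
import org.apache.lucene.search.payloads.PayloadTermQuery;
import org.apache.lucene.search.similarities.DefaultSimilarity;
import org.apache.lucene.util.BytesRef;

public class PayloadBoostExample {

  // Similarity that turns each matching position's payload into a score factor.
  public static class PayloadSimilarity extends DefaultSimilarity {
    @Override
    public float scorePayload(int doc, int start, int end, BytesRef payload) {
      if (payload == null) {
        return 1.0f;
      }
      // payloads assumed to be float-encoded, e.g. via DelimitedPayloadTokenFilter
      return PayloadHelper.decodeFloat(payload.bytes, payload.offset);
    }
  }

  public static PayloadTermQuery keywordQuery(String field, String keyword) {
    // includeSpanScore=true keeps the normal span score and multiplies in the
    // average payload of the matched positions.
    return new PayloadTermQuery(new Term(field, keyword),
        new AveragePayloadFunction(), true);
  }
}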
I suggest you read this:
http://searchhub.org/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
Thanks;
Furkan KAMACI
2013/11/24 Mark Miller
> SolrCloud does not use commits for update acceptance promises.
>
> The idea is, if you get a success from the update, it
Morning Doug,
it sounds like you can encode the norm as the number of term positions in
the title (assuming it's single-valued).
When you search, SpanQuery can access particular positions of the matched
terms, and then compare them to the number of terms decoded from the norm.
It sounds more like a hac
24 November 2013, Apache Solr™ 4.6 available
The Lucene PMC is pleased to announce the release of Apache Solr 4.6
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted se
This should work. Try adding &debug=all to your URL, and examine
the output both with and without your boosting. I believe you'll see
the difference in the score calculations. From there it's a matter
of adjusting the boosts to get the results you want.
Best,
Erick
On Sat, Nov 23, 2013 at 9:17
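For reference, a minimal SolrJ illustration of the debug suggestion above
(the query string is a placeholder): the same request can be sent with
debug=all and the score explanations compared with and without the boost.

import org.apache.solr.client.solrj.SolrQuery;

public class DebugQueryExample {
  public static SolrQuery withDebug(String userQuery) {
    SolrQuery q = new SolrQuery(userQuery); // e.g. "title:foo"
    q.set("debug", "all");                  // same as appending &debug=all to the URL
    return q;
  }
}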
bq: For example, what if the leader could give a list of
queries/filters currently in the cache which could then be executed on
the replica?
How is that better than each replica firing off its own warming
queries for its caches etc? Each replica may well fire different
autowarm queries since there'
Yes, you should use a recent Java 7. Java 6 is end-of-life and no longer
supported by Oracle. Also, read up on the various garbage collectors. It
is a complex topic and there are many guides online.
In particular, there is a problem in some Java 6 releases that causes a
massive memory leak in S
Hi,
In http://search-lucene.com/m/O1O2r14sU811 Shalin wrote:
"The splitting process is nothing but the creation of a bitset with
which a LiveDocsReader is created. These readers are then added to a
new index via the IW.addIndexes(IndexReader[] readers) method."
... which makes me wonder coul
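A hedged sketch of that mechanism against the Lucene 4.x API (class names
here are illustrative; Solr's actual splitter code differs): wrap each
segment reader with a bitset that marks only the docs belonging to one
sub-shard as live, then copy them with IndexWriter.addIndexes().

import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.FilterAtomicReader;
import org.apache.lucene.util.Bits;
import org.apache.lucene.util.FixedBitSet;

class BitsetLiveDocsReader extends FilterAtomicReader {
  private final FixedBitSet live; // docs that belong to the new sub-shard

  BitsetLiveDocsReader(AtomicReader in, FixedBitSet live) {
    super(in);
    this.live = live;
  }

  @Override
  public Bits getLiveDocs() {
    return live; // everything outside the bitset looks deleted to the merge
  }

  @Override
  public int numDocs() {
    return live.cardinality();
  }
}

// Usage (sketch): build one bitset per segment from the hash range, wrap each
// segment reader as above, then call targetWriter.addIndexes(wrappedReaders).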
Ok Erick.. I will try thanks
On 25-Nov-2013 2:46 AM, "Erick Erickson" wrote:
> This should work. Try adding &debug=all to your URL, and examine
> the output both with and without your boosting. I believe you'll see
> the difference in the score calculations. From there it's a matter
> of adjustin
Roman,
I don't fully understand your question. After a segment is flushed it's never
changed, hence segment-local docids are always the same. Due to a merge a
segment can go away, and its docs become new ones in another segment. This is
true for 'global' (Solr-style) docnums, which can flip after a merge has
ha
Hi Mark, Thanks for the answer.
One more question though: You say that if I get a success from the update,
it’s in the system, commit or not. But when exactly do I get this feedback:
is it one response for the whole request, or one per add inside the request?
I will give an example to clarify my que
If you want this promise and complete control, you pretty much need to do a doc
per request and many parallel requests for speed.
The bulk and streaming methods of adding documents do not have a good
fine-grained error reporting strategy yet. It’s okay for certain use cases and
especially b
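A hedged SolrJ 4.x sketch of that pattern (host, collection and field names
are placeholders): one document per add call, fanned out over a thread pool,
so every failure is tied to exactly one document.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.solr.client.solrj.impl.CloudSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class DocPerRequestIndexer {
  public static void index(final CloudSolrServer server, List<SolrInputDocument> docs)
      throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(8);
    for (final SolrInputDocument doc : docs) {
      pool.submit(new Runnable() {
        @Override
        public void run() {
          try {
            server.add(doc); // one doc per request: success/failure is per document
          } catch (Exception e) {
            // this failure is unambiguously tied to a single document
            System.err.println("failed to index " + doc.getFieldValue("id") + ": " + e);
          }
        }
      });
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.MINUTES);
  }
}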
Just to clarify how these two phrases come together:
1. "you will know when an update is rejected - it just might not be easy to
know which in the batch / stream"
2. "Documents that come in batches are added as they come / are processed -
not in some atomic unit."
If I send a batch of documents
On Nov 25, 2013, at 1:40 AM, adfel70 wrote:
> Just to clarify how these two phrases come together:
> 1. "you will know when an update is rejected - it just might not be easy to
> know which in the batch / stream"
>
> 2. "Documents that come in batches are added as they come / are processed -
>