Thanks Shawn and Erick.
This is what I also ended up finding: as the number of buckets increased, I
noticed the issue.
Zheng: I am using Solr 7. But this was only an experiment on the hash, i.e.,
what distribution I should expect from it (as the above gist shows). I
didn't actually index into Solr.
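For anyone who wants to reproduce that kind of hash experiment, here is a minimal sketch (not the gist linked below). It assumes the murmurhash3_x86_32 helper in solr-solrj's org.apache.solr.common.util.Hash, the hash CompositeIdRouter is built on, and it simply splits the 32-bit hash space into equal buckets, which only approximates the real shard range assignment:

    import org.apache.solr.common.util.Hash;  // ships with solr-solrj

    public class HashDistributionExperiment {

        // Map a 32-bit hash onto one of numBuckets equal ranges of the int space,
        // roughly what plain hash routing does when shard ranges are split evenly.
        static int bucket(String id, int numBuckets) {
            int hash = Hash.murmurhash3_x86_32(id, 0, id.length(), 0);
            long shifted = (long) hash - Integer.MIN_VALUE;   // 0 .. 2^32 - 1
            return (int) (shifted * numBuckets / (1L << 32));
        }

        public static void main(String[] args) {
            int shards = 117;
            for (int totalDocs : new int[] {117, 1_000_000}) {
                int[] counts = new int[shards];
                for (int i = 0; i < totalDocs; i++) {
                    counts[bucket(Integer.toString(i), shards)]++;
                }
                int empty = 0, max = 0;
                for (int c : counts) {
                    if (c == 0) empty++;
                    if (c > max) max = c;
                }
                System.out.printf("%,d docs -> empty shards: %d, largest shard: %d%n",
                        totalDocs, empty, max);
            }
        }
    }

With 117 ids the run typically shows dozens of empty buckets and a largest bucket holding only a handful of documents; with a million ids the per-shard counts land within a few percent of each other, which is the point Erick and Shawn make below.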
What Shawn said. 117 shards and 116 docs tells you absolutely nothing
useful. I've never seen the number of docs on various shards be off by
more than 2-3% when enough docs are indexed to be statistically valid.
Best,
Erick
On Fri, Mar 16, 2018 at 5:34 AM, Shawn Heisey wrote:
On 3/6/2018 11:53 AM, Nawab Zada Asad Iqbal wrote:
> I have 117 shards and I tried to use document ids from zero to 116. I find
> that the distribution is very uneven, e.g., the largest bucket receives a
> total of 5 documents, and around 38 shards will be empty. Is it expected?
With such a small data set
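A quick back-of-the-envelope check of why this is expected (my arithmetic, not from the thread): hashing n documents uniformly into n shards leaves any given shard empty with probability (1 - 1/n)^n, so for n = 117:

    P(\text{shard empty}) = \left(1 - \tfrac{1}{117}\right)^{117} \approx e^{-1} \approx 0.37,
    \qquad
    \mathbb{E}[\text{empty shards}] \approx 117 \times 0.37 \approx 43

The reported ~38 empty shards sits well within normal variation around that expectation, and a similar calculation puts the fullest shard at only about 4-5 documents, matching the observed maximum of 5. Only when each shard receives many documents does the law of large numbers pull the counts together.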
Hi,
What version of Solr are you running? How did you configure your shards in
Solr?
Regards,
Edwin
On 7 March 2018 at 02:53, Nawab Zada Asad Iqbal wrote:
> Hi solr community:
>
>
> I have been thinking of using a composite key for my next project iteration and
> tried it today to see how it distributes the documents.
Hi solr community:
I have been thinking of using a composite key for my next project iteration and
tried it today to see how it distributes the documents.
Here is a gist of my code:
https://gist.github.com/niqbal/3e293e2bcb800d6912a250d914c9d478
I have 117 shards and I tried to use document ids from zero to 116. I find
that the distribution is very uneven, e.g., the largest bucket receives a
total of 5 documents, and around 38 shards will be empty. Is it expected?
instead of adding.
--
- Mark
http://www.lucidimagination.com
On Fri, Jul 24, 2009 at 8:43 AM, Erik Hatcher wrote:
>
> On Jul 24, 2009, at 8:33 AM, Nishant Chandra wrote:
>
>> Can I use composite key for uniqueKeyId? If yes, how?
>>
>
> No - you get one field to use for uniqueKey in Solr.
On Jul 24, 2009, at 8:33 AM, Nishant Chandra wrote:
> Can I use composite key for uniqueKeyId? If yes, how?
No - you get one field to use for uniqueKey in Solr. It is your
indexer's responsibility to aggregate values from your data sources
into a single uniqueKey value. For example
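Erik's example was cut off above, but a hypothetical SolrJ sketch of "aggregating into a single uniqueKey value" could look like this, using a current SolrJ client rather than the 2009-era one (the URL, core, field names and values are invented; the schema is assumed to declare <uniqueKey>id</uniqueKey>):

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class CompositeKeyIndexer {
        public static void main(String[] args) throws Exception {
            try (SolrClient solr = new HttpSolrClient.Builder(
                    "http://localhost:8983/solr/content").build()) {

                String contentType = "NEWS";   // e.g. NEWS or VIDEO
                String sourceId = "12345";     // primary key in the source system

                SolrInputDocument doc = new SolrInputDocument();
                // The indexer, not Solr, assembles the single uniqueKey value.
                doc.addField("id", contentType + "-" + sourceId);
                doc.addField("type", contentType);
                doc.addField("source_id", sourceId);

                solr.add(doc);
                solr.commit();
            }
        }
    }

Keeping the original parts in their own fields (type, source_id) means the composite id never has to be parsed back apart at query time.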
Can I use composite key for uniqueKeyId? If yes, how?
Thanks,
Nishant
On Fri, 7 Mar 2008 17:59:48 -0800 (PST)
Chris Hostetter <[EMAIL PROTECTED]> wrote:
> I believe Norberto meant he was handling it in his update client code --
> before sending the docs to Solr.
Indeed, this is what we do. We have a process that parses certain files and
generates documents following the
I used it to index my DB and I found no bugs. Ours is a very simple use case.
There were rough edges, though. The error logging and messages were not up to
the mark. It aborted the entire indexing run when there was a missing
'required' field. It should just skip that document, or give me an option to c
Hi,
The tool is undergoing substantial testing in our QA department.
Because it is also an official internal project, the bugs are filed in
our bug tool. We are fixing them as and when they are reported. It has
gone through some good iterations and it is going to power the backend
for 2 of our
The best thing folks can do to help get patches like this
important DataImportHandler committed to trunk is to try it out,
report back experiences, and offer suggestions for improvement.
Solr 1.3 will come in _good_ time, but not before its time. There
are many substantial changes
I am also looking forward to getting this checked into trunk.
Will there be a patch with Solr 1.2 support?
Cheers
Vijay
On Sat, Mar 8, 2008 at 10:11 AM, Jon Baer <[EMAIL PROTECTED]> wrote:
> That definitely sounds like the proper way to go + will try. I'm not
> too concerned w/ my keys coming back
Good to hear that people are using DataImportHandler.
In a couple of days, we are giving another patch, cleared by our QA, with
better error handling, messaging and a lot of new features.
A committer will have to decide on when it is good enough to be committed.
--Noble
On Sat, Mar 8, 2008 a
That definitely sounds like the proper way to go + will try. I'm not
too concerned w/ my keys coming back, just that I can't seem to run the
DataImportHandler w/o one.
I was able to temporarily get around it by returning it in the entity
query, i.e.:
BTW, the DataImportHandler seems t
I believe Norberto meant he was handling it in his update client code --
before sending the docs to Solr.
Something that *seems* possible but I've never actually tried is writing
a "ConcatTokenFilterFactory" that queues up all the tokens and joins
them together (using some configured string, de
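A rough sketch of the filter half of that idea, assuming the Lucene TokenFilter API (the factory class and offset/position bookkeeping are left out; newer Lucene releases ship something similar as ConcatenateGraphFilter):

    import java.io.IOException;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    /** Buffers every token from the wrapped stream and emits one joined token. */
    public final class ConcatTokenFilter extends TokenFilter {
        private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
        private final String delimiter;
        private boolean emitted = false;

        public ConcatTokenFilter(TokenStream input, String delimiter) {
            super(input);
            this.delimiter = delimiter;
        }

        @Override
        public boolean incrementToken() throws IOException {
            if (emitted) {
                return false;
            }
            StringBuilder joined = new StringBuilder();
            // Drain the upstream tokens, joining their terms with the delimiter.
            while (input.incrementToken()) {
                if (joined.length() > 0) {
                    joined.append(delimiter);
                }
                joined.append(termAtt);
            }
            emitted = true;
            if (joined.length() == 0) {
                return false;   // nothing to emit for an empty stream
            }
            clearAttributes();
            termAtt.setEmpty().append(joined);
            return true;
        }

        @Override
        public void reset() throws IOException {
            super.reset();
            emitted = false;
        }
    }

As Chris says, this is untried; doing the concatenation in the update client before the document reaches Solr (as Norberto describes) sidesteps analysis entirely.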
Hi Norberto,
This sounds exactly like what I'm looking to do; do you have an example?
(Keep in mind I'm using data-config.xml - the DataImporter.)
I'm interested in merging different types of content in, i.e.:
NEWS12345
VIDEO12345
So I'd like to end up w/ different keys per type if possible.
Thanks.
- Jon
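One hedged possibility for the data-config.xml case, assuming a DataImportHandler build that already includes TemplateTransformer (the entity, table and column names here are invented):

    <!-- data-config.xml fragment: build the uniqueKey as <type><pk>, e.g. NEWS12345 -->
    <entity name="item" transformer="TemplateTransformer"
            query="SELECT content_id, type, title FROM content">
      <!-- 'id' is the schema uniqueKey; its value is assembled from two source columns -->
      <field column="id" template="${item.type}${item.content_id}"/>
      <field column="title"/>
    </entity>

Norberto's reply below describes the other common route: building the key yourself at document-generation time, before anything reaches Solr.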
On Thu, 6 Mar 2008 11:33:38 -0500
Jon Baer <[EMAIL PROTECTED]> wrote:
> I'm interested to know if composite keys are now possible, or if there
> is anything to copyField I can use to get composite keys working for
> my doc ids?
FWIW, we just do this @ doc generation time - grab several fields,
Hi,
I'm interested to know if composite keys are now possible, or if there
is anything to copyField I can use to get composite keys working for
my doc ids?
Thanks.
-snip-
support for composite keys ... either with some explicit change to the
<uniqueKey> declaration or perhaps just copyField with so