Well, thanks a lot.
Chris Hostetter-3 wrote
> The first question i have is why you are using a version of Solr that's
> almost 5 years old.
*Well, Solr is part of another software product and ships integrated at this
version. With the next update they will also upgrade Solr to ver. 7...*
Chris Hostetter-3 wrote
: FWIW: I used the script below to build myself 3.8 million documents, with
: 300 "text fields" consisting of anywhere from 1-10 "words" (integers
: between 1 and 200)
Whoops ... forgot to post the script...
#!/usr/bin/perl
use strict;
use warnings;
my $num_docs  = 3_800_000;
my $max_words = 10;  # 1-10 "words" per field, each an integer 1-200
# rest of the script was cut off in the mail; a minimal reconstruction of the loop:
for my $doc (1 .. $num_docs) {
  my @f = map { join ' ', map { 1 + int rand 200 } 1 .. 1 + int rand $max_words } 1 .. 300;
  print join(',', "doc$doc", @f), "\n";
}
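Output like that can be bulk-loaded with Solr's CSV update handler; a one-liner
sketch, assuming the script's output was saved to docs.csv, a core named
collection1, and a first line in the file listing the field names (the
handler's default):

curl 'http://localhost:8983/solr/collection1/update/csv?commit=true' \
     -H 'Content-type: text/csv' --data-binary @docs.csv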
: SQL DB: 4M documents with up to 5000 metadata fields per document [2xXeon
: 2.1GHz, 32GB RAM]
: Actual Solr: 1 core, version 4.6, 3.8M documents, schema has 300 metadata
: fields to import, size 3.6GB [2xXeon 2.4GHz, 32GB RAM]
: (atm we need 35h to build the index and about 24h for a mass update)
Are you doing a commit after every document? Is the index on local disk?
That is very slow indexing. With four shards and smaller documents, we can
index about a million documents per minute.
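If the slowness does come from committing per document, dropping those commits
and relying on autoCommit alone can help a lot. A minimal solrconfig.xml
sketch (the 60s value is illustrative, not tuned for your hardware):

<!-- hard commit every 60s, flushing to disk without opening a new searcher -->
<autoCommit>
  <maxTime>60000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>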
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
Hi Francois,
If I got your numbers right, you are indexing on a single server and the
indexing rate is ~31 docs/s (3.8M documents in 35h). I would first check
whether something is wrong with the indexing logic. Check where the bottleneck
is: do you read documents from the DB fast enough, and do you batch documents…
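For example, a batched feeder in the spirit of the Perl script above might
look like this; a sketch only, with a hypothetical next_doc_from_db() standing
in for your DB read and 1000 as an arbitrary batch size:

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;   # HTTP client
use JSON;             # encode_json

my $ua  = LWP::UserAgent->new;
my $url = 'http://localhost:8983/solr/collection1/update';  # adjust core name
my @batch;
while (my $doc = next_doc_from_db()) {   # hypothetical: returns one doc as a hashref
    push @batch, $doc;
    next if @batch < 1000;               # send 1000 docs per request, not one
    $ua->post($url, 'Content-Type' => 'application/json',
              Content => encode_json(\@batch));
    @batch = ();
}
# flush the tail; let autoCommit handle commits instead of committing per request
$ua->post($url, 'Content-Type' => 'application/json',
          Content => encode_json(\@batch)) if @batch;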
Assuming you cannot h
I would like to ask what your recommendations are for a new, performant Solr
architecture.
SQL DB: 4M documents with up to 5000 metadata fields per document [2xXeon
2.1GHz, 32GB RAM]
Actual Solr: 1 core, version 4.6, 3.8M documents, schema has 300 metadata
fields to import, size 3.6GB [2xXeon 2.4GHz, 32GB RAM]
(atm we need 35h to build the index and about 24h for a mass update)