Hi
I want to use the DIH component in order to import data from an old
PostgreSQL DB.
I want to be able to recover from errors and crashes.
If an error occurs, I should be able to restart and continue indexing from
where it stopped.
Is the DIH good enough for my requirements?
If not, is it possible to
Thanks, Shawn
In your opinion, which is easier: writing the importer from scratch or
extending the DIH (for example, adding the state, etc.)?
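For reference, the closest thing DIH offers out of the box to "continue from
where it stopped" is delta-import keyed off last_index_time; it only records
that timestamp after a successful run, so a crash mid-run means re-running
that delta. A minimal data-config.xml sketch, with a made-up table "logs" and
a last_modified column:

  <dataConfig>
    <dataSource driver="org.postgresql.Driver"
                url="jdbc:postgresql://localhost:5432/mydb"
                user="solr" password="secret"/>
    <document>
      <entity name="logs"
              query="SELECT id, message, last_modified FROM logs"
              deltaQuery="SELECT id FROM logs
                          WHERE last_modified > '${dataimporter.last_index_time}'"
              deltaImportQuery="SELECT id, message, last_modified FROM logs
                                WHERE id = '${dataimporter.delta.id}'"/>
    </document>
  </dataConfig>

Triggered with /dataimport?command=delta-import, DIH keeps the last start
time in dataimport.properties and feeds it back in on the next run.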
Yuval
On Thu, Apr 24, 2014 at 6:47 PM, Shawn Heisey wrote:
> On 4/24/2014 9:24 AM, Yuval Dotan wrote:
>
>> I want to
Hi,
I isolated the case
Installed Solr on a new machine (2 x Xeon E5410 @ 2.33GHz).
I have an environment with 12GB of memory.
I assigned 6GB of memory to Solr, and I'm not running any other
memory-consuming process, so no memory issues should arise.
Removed all indexes apart from two:
emptyCore – empt
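For completeness, assigning the heap just means the usual JVM flags at
startup, e.g. with the stock Jetty start.jar (flags here are illustrative):

  java -Xms6g -Xmx6g -jar start.jar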
tion. You say that your core has 750k docs and around
> 400MB? Is this some kind of test dataset, and do you expect it to grow
> significantly? For an index of this size, I wouldn't use distributed
> search, single shard should be fine.
>
>
> Tomás
>
>
> On Sun, No
Hi
Thanks very much for your answers :)
Manuel, if you have a patch, I will be glad to test its performance.
Yuval
On Mon, Nov 18, 2013 at 10:49 AM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Manuel, that sounds very interesting. Would you be willing to
> contribute this back to th
You could always try the fc facet method and maybe increase the filterCache
size.
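For example, per field that would be

  f.description.facet.method=fc

(field name is just a placeholder), and the filterCache is sized in
solrconfig.xml, something like:

  <filterCache class="solr.FastLRUCache"
               size="4096"
               initialSize="1024"
               autowarmCount="256"/>

The sizes above are only a starting point to tune against your heap; note
that the filterCache mostly helps facet.method=enum, while fc relies on the
FieldCache.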
On Thu, Nov 22, 2012 at 2:53 PM, Pravin Agrawal <
pravin_agra...@persistent.co.in> wrote:
> Hi All,
>
> We are using Solr 3.4 with the following schema fields.
>
>
> --
.org/solr/SolrPerformanceFactors#OutOfMemoryErrors
>
> Dan
>
> On Mon, Apr 30, 2012 at 9:43 AM, Yuval Dotan wrote:
>
> > Hi Guys
> > I have a problem and I need your assistance.
> > I get an exception when doing field cache faceting (the enum method works
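If the enum method is the one that survives, it can be pinned per field so
only that facet bypasses the field cache, e.g. (field name and minDf value
are placeholders):

  f.my_big_field.facet.method=enum&facet.enum.cache.minDf=100

facet.enum.cache.minDf keeps very rare terms out of the filterCache so the
enum method doesn't flood it.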
Hi All
We have an index of ~2,000,000,000 documents, and the query and facet times
are too slow for us.
Before using the shards solution for improving performance, we thought
about using the multicore feature (our goal is to maximize performance for
a single machine).
Most of our queries will be lim
Hi
Can someone please guide me to the right way to partition the Solr index?
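Without SolrCloud, the usual pattern is several cores (or machines) each
holding a slice of the documents, plus the shards parameter at query time so
one core merges the results; a sketch with made-up core names:

  http://localhost:8983/solr/core0/select?q=*:*
      &shards=localhost:8983/solr/core0,localhost:8983/solr/core1,localhost:8983/solr/core2

Documents are typically routed to a core by a hash of the uniqueKey at index
time.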
On Mon, May 7, 2012 at 11:41 AM, Yuval Dotan wrote:
> Hi All
> Jan, thanks for the reply - answers for your questions are located below
> Please update me if you have ideas that can solve my problems.
>
ery all of the documents that are NOT in that shard, and
> optimize. We had to do this in sequence so it took a few days :) You
> don't need a full optimize. Use 'maxSegments=50' or '100' to suppress
> that last final giant merge.
>
> On Tue, May 8, 2012 a
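Spelled out, the quoted delete-then-optimize step per shard looks roughly
like this (core name and the shard_id field are hypothetical; Solr 3.x/4.x
update syntax):

  curl 'http://localhost:8983/solr/shard1/update?commit=true' \
       -H 'Content-Type: text/xml' \
       --data-binary '<delete><query>*:* -shard_id:1</query></delete>'

  curl 'http://localhost:8983/solr/shard1/update?optimize=true&maxSegments=50'

maxSegments=50 stops the optimize short of merging everything down to one
giant segment.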
-1&f.Severity.facet.sort=index&f.Severity.facet.limit=-1&f.trimTime.facet.sort=index&f.trimTime.facet.limit=-1&facet=true&f.product.facet.method=enum&facet.pivot=product,Severity,trimTime
NumFound: 299
Times (ms):
QTime: 2,756 (Query: 307 + Facet: 2,449)
On Thu, Sep 20, 2012 a