Hi,
Is this already registered as a bug? Is there any fix to this issue (I want to
use EmbeddedSolrServer only)?
Regards,
Ram.
Turn off all autocommitting.
On 3/14/11 7:04 AM, "lame" wrote:
>Hi guys,
>I have master slave replication enabled. Slave is replicating every 3
>minutes and I encounter problems while I'm performing the full import
>command on the master (which takes about 7 minutes).
>Slave replicates a partial index, about 200k documents out of 700k...
Yes, commits from the application will indeed interfere. If your business
scenario allows for always using optimized indices, you might choose to
replicate only on optimize.
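For reference, a master-side snippet along these lines would do that (a sketch
only; the handler name and confFiles list are assumptions, not taken from
anyone's actual solrconfig.xml):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <!-- offer a new index version to slaves only after an optimize,
           not after every commit -->
      <str name="replicateAfter">optimize</str>
      <!-- optional: also ship config files to the slaves -->
      <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
  </requestHandler>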
On Monday 14 March 2011 18:45:15 lame wrote:
> We also have commits from the application (besides the full import) - maybe
> that is the case.
We also have commits from the application (besides the full import) - maybe
that is the case.
If you don't have any other ideas I'll probably try reindexing a second
core, then swap the cores and run a delta-import (to import documents added
in the meantime).
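For what it's worth, a two-core layout for that approach could look roughly
like the solr.xml sketch below (the core names here are made up); the swap
itself would then be a CoreAdmin call such as
/solr/admin/cores?action=SWAP&core=live&other=ondeck:

  <solr persistent="true">
    <cores adminPath="/admin/cores">
      <!-- "live" serves queries; "ondeck" is rebuilt, then swapped in -->
      <core name="live" instanceDir="live"/>
      <core name="ondeck" instanceDir="ondeck"/>
    </cores>
  </solr>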
2011/3/14 Markus Jelsma :
> These settings don't affect a commit...
These settings don't affect a commit. The maxPendingDeletes setting might, but
I'm unsure. If you commit on the master and the slaves are configured to
replicate on commit, they should all end up with the same index version.
On Monday 14 March 2011 14:42:51 lame wrote:
> It looks like this (we don't have an autoCommit section...
It looks like this (we don't have an autoCommit section in
solr.DirectUpdateHandler2 - is ramBufferSizeMB responsible for
that?):
false
10
320
2147483647
1
1000
1
single
false
320
10
2147483647
1
false
10
In solrconfig there might be an autoCommit section enabled.
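For context, an enabled autoCommit section in solrconfig.xml typically looks
something like the sketch below (the thresholds are illustrative only, not
anyone's real values); if such a block is present, documents are committed
while the import is still running, which would let the slave pull a partial
index:

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoCommit>
      <!-- commit automatically once this many docs are pending... -->
      <maxDocs>10000</maxDocs>
      <!-- ...or after this many milliseconds, whichever comes first -->
      <maxTime>60000</maxTime>
    </autoCommit>
  </updateHandler>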
On Monday 14 March 2011 14:18:42 lame wrote:
> I don't commit at all; we use the DataImporter, but I have a feeling that
> it could be done by DIH (autocommit - is it possible)?
>
> 2011/3/14 Markus Jelsma :
> > Do you commit too often? Slaves won't...
I don't commit at all; we use the DataImporter, but I have a feeling that
it could be done by DIH (autocommit - is it possible)?
2011/3/14 Markus Jelsma :
> Do you commit too often? Slaves won't replicate while the master is indexing if
> you don't send commits. Can you commit only once the indexing finishes?
Do you commit too often? Slaves won't replicate while the master is indexing if
you don't send commits. Can you commit only once the indexing finishes?
On Monday 14 March 2011 14:04:51 lame wrote:
> Hi guys,
> I have master slave replication enabled. Slave is replicating every 3
> minutes and I encounter problems...
Hi guys,
I have master slave replication enabled. Slave is replicating every 3
minutes and I encounter problems while I'm performing the full import
command on the master (which takes about 7 minutes).
The slave replicates a partial index, about 200k documents out of 700k.
After the next replication the full index is r...
On 12/14/2010 9:13 AM, Tim Heckman wrote:
Once per day in the morning, I run a full index + optimize into an "on
deck" core. When this is complete, I swap the "on deck" with the live
core. A side-effect of this is that the version number / generation of
the live index just went backwards, since t...
On Tue, Dec 14, 2010 at 10:37 AM, Shawn Heisey wrote:
> It's supposed to take care of removing the old indexes on its own - when
> everything is working, it builds an index.<timestamp> directory, replicates,
> swaps that directory in to replace index, and deletes the directory with the
> timestamp. I have n...
On 12/14/2010 8:31 AM, Tim Heckman wrote:
When using the index replication over HTTP that was introduced in Solr
1.4, what is the recommended way to periodically clean up old indexes
on the slaves?
I found references to the snapcleaner script, but that seems to be for
the older ssh/rsync replication model.
When using the index replication over HTTP that was introduced in Solr
1.4, what is the recommended way to periodically clean up old indexes
on the slaves?
I found references to the snapcleaner script, but that seems to be for
the older ssh/rsync replication model.
thanks,
Tim
Is the cleanup of indexes using Solr 1.4 Replication documented
somewhere? I can't find any information regarding this at:
http://wiki.apache.org/solr/SolrReplication
Too many snapshot indexes are being left around, and so they need to
be cleaned up.
...file a bug?
Thanks,
Osborn
-----Original Message-----
From: Osborn Chan [mailto:oc...@shutterfly.com]
Sent: Friday, January 15, 2010 12:35 PM
To: solr-user@lucene.apache.org
Subject: RE: Index Courruption after replication by new Solr 1.4 Replication
Hi Otis,
Thanks. There is no NFS anymore, a...
The index.20100127044500/ is a temp directory that should have been cleaned
up if there was no problem in replication (check the logs to see if there was
a problem). If there is a problem, the temp directory will be used as the new
index directory and the old one will no longer be used. At any given point
only one directory...
Thanks, Otis. Responses inline.
Hi,
We're using the new replication and it's working pretty well. There's one
detail I'd like to get some more information about.
As the replication works, it creates versions of the index in the data
directory. Originally we had index/, but now there are...
Answers below.
----- Original Message -----
> From: mark angelillo
>
> Hi,
>
> We're using the new replication and it's working pretty well. There's one
> detail I'd like to get some more information about.
>
> As the replication works, it creates versions of the index in the data
> directory...
Hi,
We're using the new replication and it's working pretty well. There's
one detail I'd like to get some more information about.
As the replication works, it creates versions of the index in the data
directory. Originally we had index/, but now there are dated versions
such as index.2010...
To: solr-user@lucene.apache.org
Subject: Re: Index Corruption after replication by new Solr 1.4 Replication
This is not a direct answer to your question, but can you avoid NFS? My first
guess would be that NFS somehow causes this problem. If you check the ML
archives for: NFS...
> From: Osborn Chan
> To: "solr-user@lucene.apache.org"
> Sent: Fri, January 15, 2010 3:23:21 PM
> Subject: Index Corruption after replication by new Solr 1.4 Replication
>
> Hi all,
>
> I have recently migrated to the new Solr 1.4 Replication feature with multicore...
Hi all,
I have recently migrated to the new Solr 1.4 Replication feature with
multicore support, from Solr 1.2 with NFS mounting. The following exceptions
appear in catalina.log from time to time, and there are some EOF exceptions,
which lead me to believe the slave index files are corrupted after replication
from...
: > This is a relatively safe assumption in most cases, but one that couples the
: > master update policy with the performance of the slaves - if the master gets
: > updated (and committed to) frequently, slaves might face a commit on every
: > 1-2 polls, much more than is feasible given new searcher...
>>>>>>>> ...perform the snapinstall on each at the same time (+- epsilon
>>>>>>>> seconds), so that way production load balanced query serving will
>>>>>>>> always be consistent.
>>>>>>> ...that I have no control over syncing them, but rather it polls
>>>>>>> every few minutes and then decides the next cycle based on the last
>>>>>>> time it *finished* updating, so in any case...
On Fri, Aug 14, 2009 at 11:53 AM, Jibo John wrote:
> Slightly off topic: one question on the index file transfer mechanism
> used in the new 1.4 Replication scheme.
> Is my understanding correct that the transfer is over http? (vs. rsync in
> the script-based snappuller)
Yes, th...
Slightly off topic: one question on the index file transfer
mechanism used in the new 1.4 Replication scheme.
Is my understanding correct that the transfer is over http? (vs.
rsync in the script-based snappuller)
Thanks,
-Jibo
On Aug 14, 2009, at 6:34 AM, Yonik Seeley wrote:
Longer...
>>>>> That is true. How did you synchronize them with the script based
>>>>> solution? Assuming network bandwidth is equally distributed and all
>>>>> slaves are equal in hardware/configuration, the...
>>>> ...between new searcher registration on any slave should not be more
>>>> than pollInterval, no?
>>>>
>>>>> Also, I noticed the default poll interval is 60 seconds. It would seem
>>>>> that...
>>>> ...issue, however I am not clear how this works vis-a-vis the new
>>>> searcher warmup? For a considerable index size (20 million docs+) the
>>>> warmup itself is an expensive and somewhat lengthy process and if...
>>> ...expensive and somewhat lengthy process and if a new searcher opens
>>> and warms up every minute, I am not at all sure I'll be able to serve
>>> queries with reasonable QTimes.
> ...not mean that a new index is fetched every 60 seconds. A new index is
> downloaded and installed on the slave only if a commit happened on the
> master (i.e. the index was actually changed on the master).
>
> --
> Regards,
> Shalin Shekhar Mangar.
On Fri, Aug 14, 2009 at 8:39 AM, KaktuChakarabati wrote:
>
> In the old replication, I could snappull with multiple slaves
> asynchronously
> but perform the snapinstall on each at the same time (+- epsilon seconds),
> so that way production load balanced query serving will always be
> consistent.
...appreciated!
Thanks,
-Chak
...hit. We
have it set to both optimize and commit:
commit
optimize
--
Jeff Newburn
Software Engineer, Zappos.com
jnewb...@zappos.com - 702-943-7562
> From: Gurjot Singh
> Reply-To:
> Date: Wed, 15 Jul 2009 17:04:58 -0400
> To:
> Subject: Quest...
Hi,
I am using the data import handler to do full and delta imports. I want to use
the replication feature of Solr 1.4.
For that I wanted to understand 2 scenarios:
1. What happens when the slave Solr server tries to poll the master while a
delta import is running on the master? Does the slave only copy...
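For reference, the polling is configured on the slave roughly as in the sketch
below (the masterUrl and interval are placeholders, not your values); the
slave only downloads files for an index version the master has actually
committed:

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="slave">
      <!-- in Solr 1.4 this points at the master's /replication handler -->
      <str name="masterUrl">http://master-host:8983/solr/replication</str>
      <!-- how often to poll the master (HH:mm:ss) -->
      <str name="pollInterval">00:03:00</str>
    </lst>
  </requestHandler>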
Bug filed. Thank you.
On Wed, 2009-05-27 at 22:40 +0530, Shalin Shekhar Mangar wrote:
> On Wed, May 27, 2009 at 9:01 PM, Matthew Gregg wrote:
>
> > That is disappointing then. Restricting by IP may be doable, but much
> > more work than basic auth.
> >
> >
> The beauty of open source is that this can be changed :)
On Wed, May 27, 2009 at 9:01 PM, Matthew Gregg wrote:
> That is disappointing then. Restricting by IP may be doable, but much
> more work than basic auth.
>
>
The beauty of open source is that this can be changed :)
Please open an issue; we can have basic HTTP authentication made
configurable.
That is disappointing then. Restricting by IP may be doable, but much
more work than basic auth.
On Wed, 2009-05-27 at 20:41 +0530, Noble Paul നോബിള് नोब्ळ् wrote:
> replication has no builtin security
>
>
>
> On Wed, May 27, 2009 at 8:37 PM, Matthew Gregg
> wrote:
> > I would like to protect...
replication has no builtin security
On Wed, May 27, 2009 at 8:37 PM, Matthew Gregg wrote:
> I would like to protect both reads and writes. Reads could have a
> significant impact. I guess the answer is no, replication has no built
> in security?
>
> On Wed, 2009-05-27 at 20:11 +0530, Noble
I would like to protect both reads and writes. Reads could have a
significant impact. I guess the answer is no, replication has no built-in
security?
On Wed, 2009-05-27 at 20:11 +0530, Noble Paul നോബിള് नोब्ळ् wrote:
> The question is what all do you wish to protect.
> There are 'read' as well as 'write'...
The question is: what all do you wish to protect?
There are 'read' as well as 'write' attributes.
The reads are the ones which will not cause any harm other than
consuming some CPU cycles.
The writes are the ones which can change the state of the system.
The slave uses the 'read' APIs, which i f...
I've not figured out a way to use basic auth with replication. We
ended up using IP-based auth. It shouldn't be too tricky to add
basic-auth support as, IIRC, the replication is based on the commons
httpclient library.
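As an illustration of the IP-based route (a sketch only, and Tomcat-specific;
the valve class is real, but the context path and address patterns below are
assumptions):

  <Context path="/solr" docBase="solr.war">
    <!-- only localhost and the slaves' subnet may reach this Solr instance -->
    <Valve className="org.apache.catalina.valves.RemoteAddrValve"
           allow="127\.0\.0\.1,10\.0\.1\.\d+"/>
  </Context>

Note this gates the whole webapp, not just /replication, so query clients
would need to be in the allow list too.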
On 27 May 2009, at 15:17, Matthew Gregg wrote:
On Wed, 2009-05-27 at 19...
On Wed, 2009-05-27 at 19:06 +0530, Noble Paul നോബിള് नोब्ळ् wrote:
> On Wed, May 27, 2009 at 6:48 PM, Matthew Gregg
> wrote:
> > Does replication in 1.4 support passing credentials/basic auth? If not
> > what is the best option to protect replication?
> Do you mean protecting the URL /replication?
On Wed, May 27, 2009 at 6:48 PM, Matthew Gregg wrote:
> Does replication in 1.4 support passing credentials/basic auth? If not
> what is the best option to protect replication?
Do you mean protecting the URL /replication?
Ideally Solr is expected to run in an unprotected environment. If you
wish...
Does replication in 1.4 support passing credentials/basic auth? If not
what is the best option to protect replication?