Jason,

Regarding your statement "push you over the edge": what does that mean?
Does it mean "uncharted territory with unknown ramifications," or something
more like specific, known symptoms?

I ask because our use is similar to Vinay's in some respects, and we want
to push write performance as far as it can go - but not over the edge!
In particular, I am interested in knowing the symptoms of failure, to help
us troubleshoot the underlying problems if and when they arise.

Thanks,

Scott

On Monday, June 24, 2013, Jason Hellman wrote:

> Vinay,
>
> You may wish to pay attention to how many transaction logs are being
> created along the way to your hard autoCommit, which should truncate the
> open handles for those files.  I might suggest setting a maxDocs value in
> parallel with your maxTime value (you can use both) to ensure the commit
> occurs at either breakpoint.  30 seconds is plenty of time for 5 parallel
> processes of 20 document submissions to push you over the edge.
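>
> As a rough sketch, that might look something like this in solrconfig.xml
> (the maxDocs value of 10000 is just a placeholder here - tune it to your
> ingest rate; openSearcher=false keeps the hard commit from opening a new
> searcher):
>
> <autoCommit>
>   <maxTime>30000</maxTime>
>   <maxDocs>10000</maxDocs>
>   <openSearcher>false</openSearcher>
> </autoCommit>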
>
> Jason
>
> On Jun 24, 2013, at 2:21 PM, Vinay Pothnis <poth...@gmail.com> wrote:
>
> > I have 'softAutoCommit' at 1 second and 'hardAutoCommit' at 30 seconds.
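> >
> > In solrconfig.xml terms, that is roughly the following (a sketch using the
> > standard <autoSoftCommit>/<autoCommit> elements; other settings such as
> > openSearcher are not shown):
> >
> > <autoSoftCommit>
> >   <maxTime>1000</maxTime>
> > </autoSoftCommit>
> > <autoCommit>
> >   <maxTime>30000</maxTime>
> > </autoCommit>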
> >
> > On Mon, Jun 24, 2013 at 1:54 PM, Jason Hellman <
> > jhell...@innoventsolutions.com> wrote:
> >
> >> Vinay,
> >>
> >> What autoCommit settings do you have for your indexing process?
> >>
> >> Jason
> >>
> >> On Jun 24, 2013, at 1:28 PM, Vinay Pothnis <poth...@gmail.com> wrote:
> >>
> >>> Here is the ulimit -a output:
> >>>
> >>> core file size           (blocks, -c)  0
> >>> data seg size            (kbytes, -d)  unlimited
> >>> scheduling priority              (-e)  0
> >>> file size                (blocks, -f)  unlimited
> >>> pending signals                  (-i)  179963
> >>> max locked memory        (kbytes, -l)  64
> >>> max memory size          (kbytes, -m)  unlimited
> >>> open files                       (-n)  32769
> >>> pipe size             (512 bytes, -p)  8
> >>> POSIX message queues      (bytes, -q)  819200
> >>> real-time priority               (-r)  0
> >>> stack size               (kbytes, -s)  10240
> >>> cpu time                (seconds, -t)  unlimited
> >>> max user processes               (-u)  140000
> >>> virtual memory           (kbytes, -v)  unlimited
> >>> file locks                       (-x)  unlimited
> >>>
> >>> On Mon, Jun 24, 2013 at 12:47 PM, Yago Riveiro <yago.rive...@gmail.com
> >>> wrote:
> >>>
> >>>> Hi,
> >>>>
> >>>> I have the same issue too, and my deployment is almost exactly the same
> >>>> as yours:
> >>>>
> >>>> http://lucene.472066.n3.nabble.com/updating-docs-in-solr-cloud-hangs-td4067388.html#a4067862
> >>>>
> >>>> With some concurrency and batches of 10, Solr apparently hits a deadlock
> >>>> while distributing updates.
> >>>>
> >>>> Can you dump the ulimit configuration on your servers? Some people have
> >>>> had the same issue because they were hitting the ulimit maximums defined
> >>>> for file descriptors and processes.
> >>>>
> >>>> --
> >>>> Yago Riveiro
> >>>>
> >>>>
> >>>> On Monday, June 24, 2013 at 7:49 PM, Vinay Pothnis wrote:
> >>>>
> >>>>> Hello All,
> >>>>>
> >>>>> I have the following solr cloud setup:
> >>>>>
> >>>>> * solr version 4.3.1
> >>>>> * 3-node solr cloud + replication factor 2
> >>>>> * 3 zookeepers
> >>>>> * load balancer in front of the 3 solr nodes
> >>>>>
> >>>>> I am seeing this strange behavior when I am indexing a large number
> >>>>> of documents (10 mil). When I have more than 3-5 threads sending
> >>>>> documents (in batches of 20) to solr, sometimes solr goes into a hung
> >>>>> state. After this, all the update requests time out. What we see via
> >>>>> AppDynamics (a performance monitoring tool) is that a number of
> >>>>> threads are stalled. The stack trace for one of these threads is
> >>>>> shown below.
> >>>>>
> >>>>> The cluster has to be restarted to recover from this. When I reduce
> >>>>> the
> >>>>>



-- 
Scott Lundgren
Director of Engineering
Carbon Black, Inc.
(210) 204-0483 | scott.lundg...@carbonblack.com
