They are separate cases: in attempt 1 I was ingesting to only 1 core, then to 
3 cores, and then to 5 cores. Yes, they are completely independent cores.

I think I was not reading the ‘iostat’ output right. With the -x option, the 
‘avgrq-sz’ column is constantly above 300. From some reading online, I see 
that a three-digit value for this column is a red flag. I’m trying to run the 
experiments on a faster disk now.
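
For reference, the invocation I'm looking at (the 5-second refresh interval
is just an example):

    iostat -x 5

where 'avgrq-sz' is the average size of the issued requests in 512-byte
sectors, and '%util' shows how saturated the device is.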

Yes, the intent is to max out the CPU to find the maximum load the system can 
handle.

Thanks,
Shashank

On 1/10/18, 4:59 PM, "Erick Erickson" <erickerick...@gmail.com> wrote:

    OK, so I'm assuming your indexer indexes to 1, 3 and 5 separate cores
    depending on how many are available, right? And these cores are essentially
    totally independent.
    
    I'd guess your gating factor is your ingestion process. Try spinning up two
    identical ones from two separate clients. Eventually you should be able to
    max out your CPU as you add cores. The fact that your indexing rate is
    fairly constant at 90K docs/sec is a red flag that that's the rate you're
    feeding docs to Solr.
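
    To make "two identical ones" concrete, here's a minimal SolrJ sketch of
    such a client (URL, core name, and field names are placeholders, not
    anything from your setup):

        import java.util.ArrayList;
        import java.util.List;
        import java.util.UUID;
        import org.apache.solr.client.solrj.impl.HttpSolrClient;
        import org.apache.solr.common.SolrInputDocument;

        public class IngestClient {
            public static void main(String[] args) throws Exception {
                // One client process; run several identical copies in
                // parallel and watch whether total throughput climbs.
                HttpSolrClient client = new HttpSolrClient.Builder(
                        "http://localhost:8983/solr").build();
                List<SolrInputDocument> batch = new ArrayList<>();
                for (int i = 0; i < 1_000_000; i++) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", UUID.randomUUID().toString());
                    doc.addField("field1_s", "value-" + i);
                    batch.add(doc);
                    if (batch.size() == 1000) {  // 1,000-doc batches, as in the test
                        client.add("core1", batch);
                        batch.clear();
                    }
                }
                if (!batch.isEmpty()) {
                    client.add("core1", batch);
                }
                client.commit("core1");
                client.close();
            }
        }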
    
    At some point you'll max out your CPU and that'll be the limit.
    
    Best,
    Erick
    
    On Wed, Jan 10, 2018 at 1:52 PM, Shashank Pedamallu <spedama...@vmware.com>
    wrote:
    
    > - Did you set up an actual multiple node cluster or are you running this
    > all on one box?
    > Sorry, I should have mentioned this earlier. I’m running Solr in non-cloud
    > mode. It is just a single node Solr.
    >
    > - Are you configuring Jmeter to send with multiple threads?
    > Yes, multiple threads looping a fixed number of times
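    >
    > For illustration, the Thread Group is set up along these lines (the
    > numbers here are placeholders, not the exact test values):
    >
    >     Number of Threads (users): 50
    >     Ramp-up period (seconds):  10
    >     Loop Count:                100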
    >
    > - Are they all sending to the same node, or are you distributing across
    > nodes? Is there a load balancer?
    > Yes, since there is only one node.
    >
    > - If you are sending requests up to the cloud from your local machine,
    > that is frequently a slow link.
    > Not a public cloud. Our private one.
    >
    > - are you sending one document at a time or batching them up?
    > Batching them up. About 1000 documents in one request
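    >
    > For illustration, each batch goes to the standard JSON update handler
    > roughly like this (host, core, and field names are placeholders):
    >
    >     POST http://<solr-host>:8983/solr/<core>/update
    >     Content-Type: application/json
    >
    >     [ {"id": "1",    "field1_s": "..."},
    >       ...
    >       {"id": "1000", "field1_s": "..."} ]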
    >
    > Thanks,
    > Shashank
    >
    > On 1/10/18, 1:35 PM, "Gus Heck" <gus.h...@gmail.com> wrote:
    >
    >     Ok then here's a few things to check...
    >
    >        - Did you set up an actual multiple node cluster or are you
    >        running this all on one box?
    >        - Are you configuring Jmeter to send with multiple threads?
    >        - Are they all sending to the same node, or are you distributing
    >        across nodes? Is there a load balancer?
    >        - Are you sending from a machine on the same network as the
    >        machines in the Solr cluster?
    >        - If you are sending requests up to the cloud from your local
    >        machine, that is frequently a slow link.
    >        - Also don't forget to check your zookeeper cluster's health...
    >        if it's bogged down that will slow down Solr (a quick check is
    >        sketched below).
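    >
    >     For example, a quick health check (assuming ZooKeeper's standard
    >     four-letter-word commands are enabled; host and port are
    >     placeholders):
    >
    >         echo ruok | nc zk-host 2181   # a healthy server answers "imok"
    >         echo mntr | nc zk-host 2181   # latency and outstanding requests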
    >
    >     If you have all machines on the same network, many threads, load
    >     balancing, and no questionable equipment (or networking limitations
    >     put in place by IT) in the middle, then something (either CPU or
    >     network interface) should be maxed out somewhere on at least one
    >     machine, either on the Jmeter side or the Solr side.
    >
    >     -Gus
    >
    >     On Wed, Jan 10, 2018 at 3:54 PM, Shashank Pedamallu
    >     <spedama...@vmware.com> wrote:
    >
    >     > Hi Gus,
    >     >
    >     > Thanks for the reply. I’m sending via JMeter running on my local
    >     > machine to Solr running on a remote VM.
    >     >
    >     > Thanks,
    >     > Shashank
    >     >
    >     > On 1/10/18, 12:34 PM, "Gus Heck" <gus.h...@gmail.com> wrote:
    >     >
    >     >     Ingested how? Sounds like your document sending mechanism is
    >     >     maxed, not the solr cluster...
    >     >
    >     >     On Wed, Jan 10, 2018 at 2:58 PM, Shashank Pedamallu
    >     >     <spedama...@vmware.com> wrote:
    >     >
    >     >     > Hi,
    >     >     >
    >     >     >
    >     >     >
    >     >     > I’m trying to find the upper thresholds of ingestion, and I
    >     >     > have tried the following. In each of the experiments, I’m
    >     >     > ingesting random documents with 5 fields.
    >     >     >
    >     >     >
    >     >     > Number of cores | Documents ingested per second per core
    >     >     > ----------------+---------------------------------------
    >     >     >        1        |                89000
    >     >     >        3        |                33000
    >     >     >        5        |                18000
    >     >     >
    >     >     >
    >     >     > As you can see, the number of documents ingested per core is
    >     >     > not scaling horizontally as I add more cores. Rather, the
    >     >     > total number of documents ingested by the Solr JVM tops out
    >     >     > at around 90K documents per second.
    >     >     >
    >     >     >
    >     >     > From the iostat and top commands, I do not see any
    >     >     > bottlenecks with the IOPS or CPU respectively; CPU usage is
    >     >     > around 65%, and a sample of the iostat output is below:
    >     >     >
    >     >     > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
    >     >     >
    >     >     >           55.32    0.00    2.33    1.64    0.00   40.71
    >     >     >
    >     >     >
    >     >     > Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
    >     >     >
    >     >     > sda5           2523.00     45812.00    298312.00      45812     298312
    >     >     >
    >     >     >
    >     >     > Can someone please guide me on how to debug this further and
    >     >     > root-cause the bottleneck preventing ingestion from scaling
    >     >     > horizontally?
    >     >     >
    >     >     >
    >     >     > Thanks,
    >     >     >
    >     >     > Shashank
    >     >     >
    >     >
    >     >
    >     >
    >     >     --
    >     >     http://www.the111shift.com
    >     >
    >     >
    >     >
    >
    >
    >     --
    >     http://www.the111shift.com
    >
    >
    >
    >
    >
    
