Thank you for the reply, Kevin. I was using 6 VMs from our private cloud.
Five of them were acting as clients, each ingesting data into one of 5
independent cores. The sixth VM hosts Solr, which receives the ingest
requests for all cores. Since they are all on the same network, I don’t
think they should be limited by network bandwidth for the volume of
requests I’m sending.
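
(For a rough sense of scale: assuming ~1 KB per document, which is an
assumption rather than a measured figure, 90,000 docs/sec is about
90 MB/s, or roughly 720 Mbit/s on the wire.)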

Thanks,
Shashank

On 1/11/18, 10:21 AM, "Kevin Risden" <kris...@apache.org> wrote:

    When you say "multiple machines", were these all local machines, VMs, or
    something else? I worked with a group once that used laptops to benchmark
    a service, and it was a WiFi network limit that caused weird results. LAN
    connections, or better yet a dedicated client machine, would help push
    more documents.
    
    Kevin Risden
    
    On Thu, Jan 11, 2018 at 11:39 AM, Shashank Pedamallu <spedama...@vmware.com>
    wrote:
    
    > Thank you very much for the reply, Shawn. "Is the jmeter running on a
    > different machine from Solr or on the same machine?" Solr is running
    > on a dedicated VM, and I’ve tried splitting the client requests across
    > multiple machines, but the result was no different. So I don’t think
    > the bottleneck is on the client side.
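    >
    > For reference, one way to drive this kind of parallel ingest from a
    > client is SolrJ’s ConcurrentUpdateSolrClient. A minimal sketch, with
    > an illustrative URL, core name, field names, queue size, and thread
    > count (the actual load in this thread came from jmeter):
    >
    >     // Multi-threaded ingest sketch using SolrJ's
    >     // ConcurrentUpdateSolrClient, which queues documents and sends
    >     // batched updates from background threads.
    >     import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
    >     import org.apache.solr.common.SolrInputDocument;
    >
    >     public class IngestClient {
    >         public static void main(String[] args) throws Exception {
    >             ConcurrentUpdateSolrClient client =
    >                 new ConcurrentUpdateSolrClient.Builder(
    >                         "http://solr-host:8983/solr/core1") // assumed URL
    >                     .withQueueSize(10000)  // docs buffered before sending
    >                     .withThreadCount(4)    // concurrent sender threads
    >                     .build();
    >             for (int i = 0; i < 1_000_000; i++) {
    >                 SolrInputDocument doc = new SolrInputDocument();
    >                 doc.addField("id", Integer.toString(i));
    >                 doc.addField("body_txt", "sample payload " + i);
    >                 client.add(doc);  // queued; sent asynchronously
    >             }
    >             client.blockUntilFinished();  // flush queued updates
    >             client.commit();
    >             client.close();
    >         }
    >     }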
    >
    > Thanks,
    > Shashank
    >
    >
    > On 1/10/18, 10:54 PM, "Shawn Heisey" <apa...@elyograg.org> wrote:
    >
    >     On 1/10/2018 12:58 PM, Shashank Pedamallu wrote:
    >     > As you can see, the number of documents being ingested per core
    >     > is not scaling horizontally as I add more cores. Instead, the
    >     > total number of documents ingested by the Solr JVM tops out at
    >     > around 90k documents per second.
    >
    >     I would call 90K documents per second a very respectable speed.  I
    >     can't get my indexing to happen at anywhere near that rate.  My
    >     indexing is not multi-threaded, though.
    >
    >     > From the iostat and top commands, I do not see any bottlenecks
    >     > with the IOPS or CPU respectively. CPU usage is around 65%, and
    >     > a sample of iostat output is below:
    >     >
    >     > avg-cpu:  %user   %nice %system %iowait  %steal   %idle
    >     >           55.32    0.00    2.33    1.64    0.00   40.71
    >     >
    >     > Device:          tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
    >     > sda5         2523.00     45812.00    298312.00      45812     298312
    >
    >     Nearly 300 megabytes per second write speed?  That's a LOT of data.
    >     This storage must be quite a bit better than a single spinning disk.
    >     You won't get that kind of sustained transfer speed out of standard
    >     spinning disks unless they are using something like RAID10 or RAID0.
    >     This transfer speed is also well beyond the capabilities of Gigabit
    >     Ethernet.
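    >
    >     (For scale: gigabit Ethernet carries at most 10^9 bits/sec, which
    >     is about 125 MB/s, so ~300 MB/s of sustained writes can only be
    >     coming from local storage.)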
    >
    >     When Gus asked whether you were sending documents to the cloud from
    >     your local machine, I don't think he was referring to a public
    >     cloud.  I think he assumed you were running SolrCloud, so "cloud"
    >     was probably referring to your Solr installation, not a public
    >     cloud service.  If I had to guess, I think the intent was to find
    >     out what caliber of machine you're using to send the indexing
    >     requests.
    >
    >     I don't know if the bottleneck is on the client side or the server
    >     side.  But I would imagine that with everything on a single
    >     machine, you may not be able to get the ingestion rate to go much
    >     higher.
    >
    >     Is the jmeter running on a different machine from Solr or on the same
    >     machine?
    >
    >     Thanks,
    >     Shawn
    >
    >
    >
    
