Thank you Naga & Sunil. Naga, I would like to know more about the counters; are they a cluster-wide resource managed at a central location, so they can be tracked/verified later?
Please advise.

Thanks,
Rajila

On Tue, Jul 25, 2017 at 7:01 PM, Naganarasimha Garla <[email protected]> wrote:

> Hi Rajila,
>
> One option you can think of is using custom "counters", and having logic to
> increment them whenever you insert or run any custom logic. These counters
> can be retrieved from the MR interfaces, and even from the web UI after the
> job has finished.
>
> Regards,
> + Naga
>
> On Tue, Jul 25, 2017 at 7:12 PM, Sunil Govind <[email protected]> wrote:
>
>> Hi Rajila,
>>
>> From the YARN side, you will be able to get detailed information about
>> the application, and that application could be MapReduce or anything
>> else. But what kind of operation is done inside that MapReduce app is
>> specific to that application (here, MapReduce).
>>
>> YARN can only give you time/memory/cpu usage w.r.t. the app, or at most
>> at node level.
>>
>> Thanks
>> Sunil
>>
>> On Tue, Jul 25, 2017 at 3:46 AM rajila2008 . <[email protected]> wrote:
>>
>>> Hi all,
>>>
>>> Does YARN provide application-level info?
>>>
>>> For example: there is a map-reduce job persisting its outcome in a
>>> NoSQL datastore by executing an INSERT command. Can YARN provide the
>>> execution time for the INSERT, if the application itself is not logging
>>> that info anywhere?
>>>
>>> There's some argument at the workplace; the dev team is asking
>>> prod-support to find such info through the YARN logs.
>>>
>>> I believe YARN's resource reporting is similar to the Unix "top"
>>> command, but at cluster level. "top" gives system-level info, not how
>>> many INSERTs a job executed. Similarly, YARN will not give
>>> application-specific info like the number of INSERT ops, the record
>>> count, or the array size for a job; the application needs to log such
>>> info as needed.
>>>
>>> Could anyone please clarify if my understanding is correct?
>>>
>>> Regards,
>>> Rajila
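To make Naga's suggestion concrete, here is a minimal sketch of the custom-counter pattern against the Hadoop MapReduce API. The enum name `InsertCounters`, the counter names, the mapper's types, and the timing logic are all illustrative assumptions, not taken from the thread; it is a sketch, not a definitive implementation, and it needs the Hadoop client jars and a cluster (or local job runner) to actually execute.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hypothetical mapper illustrating the custom-counter idea from the thread.
public class InsertTrackingMapper
        extends Mapper<LongWritable, Text, Text, Text> {

    // Enum-backed counters: each task increments its own local copy, and
    // the MR framework aggregates them into job-level totals that survive
    // the job and are visible in the web UI / job history.
    public enum InsertCounters { INSERTS, INSERT_TIME_MS }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        long start = System.currentTimeMillis();

        // ... perform the NoSQL INSERT for this record here ...

        long elapsed = System.currentTimeMillis() - start;

        // Record one INSERT and the time it took.
        context.getCounter(InsertCounters.INSERTS).increment(1);
        context.getCounter(InsertCounters.INSERT_TIME_MS).increment(elapsed);
    }
}
```

After the job completes, the driver can read the aggregated totals via the Job API, e.g. `job.getCounters().findCounter(InsertCounters.INSERTS).getValue()`; counters also appear per job in the JobHistory web UI, which addresses the "tracked/verified later" part of the question.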
